Objectives & Scope
ISTAI26 brings together researchers, practitioners, and industry stakeholders working at the intersection of sensor technologies and artificial intelligence systems. The conference focuses on end-to-end intelligent sensing pipelines, from data acquisition and signal processing to learning, deployment, and real-world validation.
ISTAI26 welcomes contributions spanning novel sensor hardware and multi-modal sensing, robust learning from sensor data, edge-AI and IoT architectures, cyber-physical integration, and trustworthy AI principles, including reliability, safety, security, and standards compliance. By encouraging both methodological advances and field-tested prototypes, the conference aims to accelerate the development of scalable, efficient, and dependable sensor-driven AI solutions across diverse application domains.
Key Objectives
- Advancing state-of-the-art research and emerging directions in sensor technology and AI-driven intelligent systems
- Promoting holistic, system-level approaches connecting sensing, processing, learning, and deployment
- Strengthening innovation in multi-sensor fusion, real-time inference, and edge intelligence
- Supporting reproducible research through benchmarking, evaluation, and transparent experimentation
- Expanding applications in healthcare, smart agriculture, Industry 4.0/5.0, smart cities, energy, and environmental monitoring
- Fostering academia–industry collaboration for deployable and standards-aware solutions
Conference Tracks
Track 1. Sensor Technologies and Smart Sensing Systems
- Novel sensors: MEMS, biosensors, chemical/gas sensors, optical, thermal, and RF sensors
- Sensor design, calibration, error modeling, noise characterization
- Multi-sensor systems, sensor networks, synchronization and time-stamping
- Energy-efficient sensing, low-power hardware, energy harvesting solutions
- Wearable and mobile sensing, embedded device integration
- Sensor-enabled digital twins and monitoring infrastructures
Track 2. Signal Processing, Data Fusion, and Learning from Sensor Data
- Signal processing: filtering, spectral analysis, time-series modeling
- Feature engineering, representation learning, self-supervised learning
- Multi-modal learning (audio, IMU, vision, LiDAR, radar)
- Sensor fusion: Bayesian methods, Kalman/particle filtering, deep fusion
- Anomaly detection, fault diagnosis, predictive maintenance
- Learning under limited data: few-shot learning, transfer learning
- Data quality: missing data handling, noise robustness, labeling practices
Track 3. Perception, Vision, and Multimodal Signal Intelligence
- Computer vision: detection, segmentation, tracking, event recognition
- Vision Transformers and foundation models
- Medical, industrial, and agricultural vision systems
- Multimodal perception: vision + IMU + radar + LiDAR + audio
- Time-series analytics: forecasting, event detection, change-point detection
- Audio and acoustic sensing applications
- Radar/LiDAR processing, SLAM, point cloud learning
- Signal enhancement: denoising, super-resolution
- Robustness to noise, domain shift, and sensor artifacts
- Real-time perception pipelines on edge devices
- Benchmarking and evaluation protocols
Track 4. Intelligent Learning Systems: Deep Learning, Reinforcement Learning, and Trustworthy AI
- Deep learning architectures: CNNs, Transformers, GNNs, diffusion models
- Reinforcement learning: safe RL, multi-agent RL, RL combined with model predictive control (MPC)
- Learning under constraints: small data, continual learning
- Optimization and AutoML techniques
- Probabilistic ML and uncertainty estimation
- Trustworthy AI: explainability, fairness, robustness
- Federated and privacy-preserving learning
- Anomaly detection and predictive maintenance
- Causal learning and decision intelligence
- MLOps: monitoring, reproducibility, model governance
- Human-in-the-loop and responsible AI deployment
- Real-world validation and benchmarking