Google Research Introduces SensorLM for Wearable Data Interpretation
Google Research has introduced SensorLM, a new family of sensor-language foundation models. SensorLM is trained on nearly 60 million hours of multimodal wearable sensor data, aiming to bridge the gap between raw sensor signals and their real-world meanings.
The models are pre-trained on data from over 103,000 individuals, collected from devices like Fitbit and Pixel Watch. This extensive dataset allows SensorLM to generate human-readable descriptions from complex sensor data, offering new capabilities in personalized health insights.
SensorLM employs a combination of contrastive learning and generative pre-training to interpret and generate text from sensor data. This enables zero-shot classification of activities and enhances cross-modal retrieval, allowing users to query sensor data using natural language.
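To make the zero-shot idea concrete, here is a minimal sketch of how contrastively trained sensor and text encoders enable activity classification without labeled training examples: a sensor window and a set of candidate activity descriptions are mapped into a shared embedding space, and the label with the highest cosine similarity wins. The encoders below are random-projection placeholders, not SensorLM's actual architecture, and all names (`embed_sensor`, `embed_text`, `zero_shot_classify`) are illustrative assumptions.

```python
import numpy as np

# Toy stand-ins for contrastively trained encoders. In a real
# sensor-language model these would be learned networks mapping
# both modalities into one shared embedding space.
rng = np.random.default_rng(0)
DIM = 64                  # shared embedding dimension (illustrative)
WINDOW = 100 * 3          # e.g. 100 timesteps x 3 accelerometer axes

W_sensor = rng.standard_normal((WINDOW, DIM))  # placeholder sensor encoder
W_text = rng.standard_normal((16, DIM))        # placeholder text encoder

def embed_sensor(window: np.ndarray) -> np.ndarray:
    """Project a (timesteps, channels) sensor window into the
    shared space and L2-normalize for cosine similarity."""
    v = window.reshape(-1) @ W_sensor
    return v / np.linalg.norm(v)

def embed_text(label: str) -> np.ndarray:
    """Toy hashed bag-of-characters text features, standing in
    for a real language encoder."""
    feats = np.zeros(16)
    for ch in label:
        feats[ord(ch) % 16] += 1.0
    v = feats @ W_text
    return v / np.linalg.norm(v)

def zero_shot_classify(window: np.ndarray, labels: list[str]):
    """Pick the candidate label whose text embedding is most
    similar to the sensor embedding -- no task-specific training."""
    s = embed_sensor(window)
    sims = {lab: float(s @ embed_text(lab)) for lab in labels}
    return max(sims, key=sims.get), sims

labels = ["running", "swimming", "sleeping"]
window = rng.standard_normal((100, 3))
pred, sims = zero_shot_classify(window, labels)
```

The same shared embedding space supports cross-modal retrieval in the other direction: embed a natural-language query and rank stored sensor windows by similarity to it.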
The research highlights SensorLM's potential in advancing human activity recognition and healthcare applications, setting a new standard in sensor data understanding.