Spatial Intelligence Laboratory

Advancing Spatial Intelligence for dense worlds.

Developing visual foundation models that synthesize 3D geometry and physical dynamics into actionable intelligence for complex, unstructured environments.

Core Projects

High-fidelity spatial reasoning & multi-modal integration.

P-01 // Urban Density Intelligence

Dense World

A city-scale intelligence initiative for high-density environments. Dense World models crowd flow, mixed mobility, and infrastructure constraints to generate robust forecasting and planning signals where standard low-density assumptions fail.

Dense urban mobility and crowd dynamics
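The crowd-flow modelling Dense World describes can be illustrated with a toy density simulation. This is purely a sketch of the idea, not the project's actual model: cells over a capacity threshold shed people to their neighbours, a crude stand-in for learned flow dynamics (grid size, `capacity`, and `rate` are all illustrative assumptions, with periodic edges for simplicity).

```python
import numpy as np

# Toy grid of pedestrian densities (people per cell); illustrative only.
density = np.zeros((5, 5))
density[2, 2] = 100.0

def step_crowd_flow(d, capacity=40.0, rate=0.2):
    """One forecasting tick: cells above capacity shed a fraction of
    their overflow equally to the four neighbours (periodic edges)."""
    shed = np.maximum(d - capacity, 0.0) * rate
    out = d - shed
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        out += np.roll(shed, shift, axis=axis) / 4.0
    return out

after = step_crowd_flow(density)
```

Because mass only moves between cells, the total population is conserved at every step, which is the kind of invariant a forecasting signal in a high-density setting has to respect.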
Neural Architecture
P-02 // JEPA World Models

FactorJEPA

A Joint-Embedding Predictive Architecture optimized for the extreme density of South Asian urban environments. FactorJEPA learns invariant representations of world dynamics by factoring geometry, semantics, and temporal flow into discrete, manageable latent spaces.

Research Paper ↗ · Active Inference 0.94
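The factoring idea behind FactorJEPA can be sketched in a few lines of NumPy. All dimensions, weights, and the linear predictor below are illustrative assumptions; a real JEPA uses deep encoders, an EMA target encoder, and gradient training, none of which is reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical widths for the three factored latent spaces.
D_GEOM, D_SEM, D_TEMP = 32, 16, 8
D_OBS = 64

# Context encoder; the target encoder is typically an EMA copy of it
# in JEPA-style training (here just a frozen copy).
W_ctx = rng.standard_normal((D_OBS, D_GEOM + D_SEM + D_TEMP)) * 0.1
W_tgt = W_ctx.copy()

# A single linear predictor mapping context latents to target latents.
W_pred = rng.standard_normal((D_GEOM + D_SEM + D_TEMP,) * 2) * 0.1

def encode(obs, W):
    """Encode an observation and split it into factored latents."""
    z = obs @ W
    return np.split(z, [D_GEOM, D_GEOM + D_SEM], axis=-1)

def jepa_losses(obs_t, obs_t1):
    """Predict the target's factored latents from the context's,
    scoring geometry, semantics, and temporal flow separately."""
    z_ctx = np.concatenate(encode(obs_t, W_ctx), axis=-1)
    z_hat = np.split(z_ctx @ W_pred, [D_GEOM, D_GEOM + D_SEM], axis=-1)
    z_tgt = encode(obs_t1, W_tgt)  # no gradient flows here in practice
    return {name: float(np.mean((h - t) ** 2))
            for name, h, t in zip(("geometry", "semantics", "temporal"),
                                  z_hat, z_tgt)}

losses = jepa_losses(rng.standard_normal(D_OBS), rng.standard_normal(D_OBS))
```

The point of the factoring is visible in the loss dictionary: each latent subspace gets its own prediction error, so geometry, semantics, and temporal flow can be weighted or diagnosed independently.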
P-03 // Embodied Intelligence

PragyaVLA

Our Vision-Language-Action foundation model. Featuring a novel Locomotion-Aware Chain-of-Thought (LA-CoT) mechanism, PragyaVLA bridges high-level linguistic reasoning with low-level torque control for dexterous robots in domestic and industrial settings.

Technical Specs ↗ · Cross-Modal Latency 12 ms
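The LA-CoT idea of pairing each linguistic reasoning step with a motion commitment can be sketched as a tiny pipeline. The primitive names, gain, and three-joint layout below are illustrative assumptions, not PragyaVLA's actual interface; a real VLA lowers each step through a learned policy rather than a lookup table:

```python
from dataclasses import dataclass

# Hypothetical locomotion primitives mapped to joint activations.
PRIMITIVES = {
    "walk_forward": [0.8, 0.8, 0.2],
    "turn_left":    [0.3, 0.9, 0.1],
    "crouch":       [0.1, 0.1, 0.9],
}

@dataclass
class ThoughtStep:
    """One step of a locomotion-aware chain of thought: a linguistic
    rationale paired with the primitive it commits the body to."""
    rationale: str
    primitive: str

def la_cot_to_torques(steps, gain=10.0):
    """Lower each reasoning step to a torque command for three joints
    by scaling the chosen primitive's activation pattern."""
    return [[gain * a for a in PRIMITIVES[s.primitive]] for s in steps]

plan = [
    ThoughtStep("The doorway is narrow, approach slowly.", "walk_forward"),
    ThoughtStep("Obstacle on the right, steer away.", "turn_left"),
    ThoughtStep("Low shelf ahead, lower the torso.", "crouch"),
]
commands = la_cot_to_torques(plan)
```

The design point is that every torque command is traceable back to a sentence of reasoning, which is what lets high-level language audit low-level control.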
Embodied AI
P-04 // Defensive Motion

KalariSena

A framework for fall-resilient humanoid motion inspired by Kalaripayattu, one of the world's oldest martial arts. KalariSena uses high-frequency motion tracking to let autonomous systems recover from physical disruptions and navigate precarious terrain with biological fluidity.

Motion Library ↗ · Resilience Index 9.2/10
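The recovery behaviour KalariSena describes can be sketched as a minimal control loop: watch a high-frequency tilt signal, switch to a recovery policy when a disruption exceeds a limit, and drive the body back upright. The thresholds, gain, and proportional correction are all assumptions for illustration; the real system runs learned whole-body policies:

```python
TILT_LIMIT = 0.35      # rad of torso tilt that triggers recovery (assumed)
RECOVERY_GAIN = 0.5    # fraction of tilt corrected per control tick (assumed)

def recovery_controller(tilt_stream):
    """Consume tilt readings; when a push exceeds the limit, apply
    proportional corrections until upright, logging state transitions."""
    events = []
    for tilt in tilt_stream:
        if abs(tilt) <= TILT_LIMIT:
            events.append(("NOMINAL", tilt))
            continue
        events.append(("DISRUPTED", tilt))
        while abs(tilt) > 0.01:          # drive tilt back toward zero
            tilt -= RECOVERY_GAIN * tilt
        events.append(("RECOVERED", tilt))
    return events

log = recovery_controller([0.05, 0.02, 0.8, 0.04])
```

Feeding the controller a stream with one large disturbance yields a NOMINAL → DISRUPTED → RECOVERED → NOMINAL trace, the shape of event log a resilience benchmark would score.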
Motion Tracking
Network Mesh
P-05 // AI Governance

Kalam Protocol

The regulatory and safety backbone of the ecosystem. Emphasizing deterministic alignment, safety guardrails, and AI policy, the Kalam Protocol ensures that autonomous systems operate within ethical boundaries across mesh-networked infrastructures.

Policy Framework ↗ · Alignment Stability V2.4
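A deterministic guardrail of the kind the Kalam Protocol describes can be sketched as an envelope check: an action executes only if every field stays inside declared limits, and violations are reported rather than executed. The envelope fields and limits below are invented for illustration; the protocol's actual rule set is not reproduced here:

```python
# Hypothetical safety envelope for one autonomous action.
SAFETY_ENVELOPE = {
    "max_velocity": 1.5,      # m/s (assumed limit)
    "max_torque": 40.0,       # N·m (assumed limit)
    "geofence": (0.0, 100.0), # metres along a 1-D corridor (assumed)
}

def verify_action(action):
    """Deterministic check: return (ok, violations). The same input
    always yields the same verdict, which is what makes it auditable."""
    violations = []
    if action["velocity"] > SAFETY_ENVELOPE["max_velocity"]:
        violations.append("velocity")
    if action["torque"] > SAFETY_ENVELOPE["max_torque"]:
        violations.append("torque")
    lo, hi = SAFETY_ENVELOPE["geofence"]
    if not (lo <= action["position"] <= hi):
        violations.append("geofence")
    return (len(violations) == 0, violations)

ok, why = verify_action({"velocity": 2.0, "torque": 10.0, "position": 50.0})
```

Because the check is a pure function of the action, a mesh of nodes running it independently will always agree on whether an action was in bounds.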

The Evidence

Cross-ecosystem preview of raw sensory input versus processed spatial understanding.

Raw Sensor Input → FactorJEPA Spatial Map

Urban Occlusion Benchmarks

FactorJEPA reconstructs 3D volumes under up to 98% occlusion in dense traffic.

Humanoid Raw Kinematic Input → KalariSena Resilience Map

Zero-Shot Locomotion Transfer

PragyaVLA + KalariSena achieve 100% stability on uneven surfaces, both in simulation and on real hardware.

Performance Index

88% JEPA Efficiency (vs. 65% Baseline)
94% VLA Accuracy (vs. 72% Standard VLA)
0.42x Parameter Efficiency Gain
42 ms Avg Global Latency
124-DoF Tracking Support
99.8% Safety Verification Rate

Partner with the Lab.

We are selectively opening our spatial intelligence frameworks to global research institutions and industry leaders.

Global Headquarters

BITS, Goa Campus, India

Communications

[email protected]