Pioneering Neural Database Technology

Databases, Reimagined with Neural Intelligence

Aedus replaces traditional tabular storage with Implicit Neural Representations — encoding entire databases as compact neural networks that query instantly, compress massively, and integrate natively with AI pipelines.

100×
Compression
<1ms
Query Latency
∞
Resolution
[Diagram: an implicit neural network mapping coordinate inputs (x, y, z) through a sin(ωx) encoding and hidden layers with ReLU activations to output data values f(x). Badges: ✓ 847× smaller · ⚡ 0.3ms query]
The Problem

Traditional Databases Are Breaking Under AI

As datasets grow from gigabytes to petabytes, legacy row-and-column storage buckles under the weight of modern AI workloads.

Exponential Storage Costs

Sensor data, genomics, climate simulations, and media assets are doubling every 18 months. Row-based storage scales linearly — and so do your bills.

AI Cannot Natively Query

Neural networks speak tensors and gradients. Traditional SQL returns discrete rows — requiring costly ETL pipelines every time you want to train or infer.

Discrete, Not Continuous

Reality is continuous. Databases are discrete. Every interpolation, super-resolution, or missing value requires expensive post-processing that degrades accuracy.

Traditional Database: 847 GB

id     lat      lon        temp   hum
1001   37.774   -122.419   18.3   72
1002   37.774   -122.419   18.4   71
1003   37.774   -122.419   18.4   71
1004   37.775   -122.420   18.5   70
1005   37.775   -122.420   18.5   70
...    ...      ...        ...    ...

Near-duplicate rows waste storage
Redundant data: ~73%
Aedus INR Database: 1.02 GB
// Query any coordinate — even unsampled
aedus.query({
lat: 37.7742,
lon: -122.4194,
time: 1704067200
})
→ { temp: 18.41, hum: 71.2 }
// 0.28ms — interpolated, not stored
847×
smaller than raw
∞
resolution queries
How It Works

From Raw Data to Neural Intelligence

Aedus transforms any structured dataset into a compact neural representation in three steps — no data scientists required.

1
Step 1
.CSV
lat,lon,val
37.7,−122.4,18.3
37.8,−122.5,19.1
...
.HDF5
/data/grid
shape: [4096,4096]
dtype: float32
...

Ingest Your Dataset

Connect any structured data source — time-series, spatial grids, sensor networks, scientific simulations. Aedus normalizes coordinates and values automatically.

CSV · Parquet · HDF5 · S3
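The normalization in Step 1 can be sketched in a few lines. This is an illustrative stand-in, not the actual Aedus ingestion code; the [-1, 1] target range is the convention most INR trainers assume.

```python
import numpy as np

def normalize(raw: np.ndarray):
    """Scale each column of a (rows, cols) array to [-1, 1].

    Illustrative stand-in for the ingestion step, not the Aedus API.
    Returns the scaled array plus per-column bounds for round-tripping.
    """
    lo = raw.min(axis=0)
    hi = raw.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard constant columns
    return 2.0 * (raw - lo) / span - 1.0, lo, hi

def denormalize(scaled, lo, hi):
    """Invert `normalize` using the stored per-column bounds."""
    return (scaled + 1.0) / 2.0 * (hi - lo) + lo

# Rows of (lat, lon, temp), as in the weather example.
rows = np.array([
    [37.774, -122.419, 18.3],
    [37.774, -122.419, 18.4],
    [37.775, -122.420, 18.5],
])
scaled, lo, hi = normalize(rows)
print(scaled.min(), scaled.max())  # -1.0 1.0
```

Keeping the per-column bounds alongside the network weights is what makes queries in original units possible later.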
2
Step 2
[Chart: training loss vs. epochs]

Train the Neural Representation

A lightweight MLP is trained to fit your data as a continuous function. Aedus auto-tunes architecture depth, width, and encoding to match your accuracy targets.

SIREN · NeRF · Hash Encoding
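Aedus's auto-tuned trainer isn't public, but the core idea of Step 2, fitting data as a continuous sinusoidal function, can be approximated in toy form: random sine features plus a least-squares readout. Real SIREN training optimizes every layer by gradient descent; here only the linear output layer is solved, and all frequencies and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for Step 2: represent a 1-D signal as a weighted sum of
# sine features. Only the linear readout is fit, via least squares.
x = np.linspace(-1, 1, 200)
y = np.sin(4 * np.pi * x) + 0.3 * np.sin(11 * np.pi * x)  # the "dataset"

n_feat = 256
omega = rng.uniform(1.0, 40.0, n_feat)       # random frequencies
phase = rng.uniform(0.0, 2 * np.pi, n_feat)  # random phases

def features(pts):
    return np.sin(np.outer(np.atleast_1d(pts), omega) + phase)

w, *_ = np.linalg.lstsq(features(x), y, rcond=None)

def query(pts):
    """Evaluate the fitted continuous representation anywhere."""
    return features(pts) @ w

train_err = np.max(np.abs(query(x) - y))
print(f"max fit error: {train_err:.1e}")
```

Once `w` is solved, `query` accepts any coordinate, not just the 200 sampled ones, which is the property the full pipeline exploits.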
3
Step 3
> query(37.77, −122.41)  →  18.41 °C  ·  0.28 ms
> gradient(lat)  →  ∂f/∂lat = 0.12  ·  0.31 ms
> query(37.78, −122.42)  →  18.67 °C  ·  0.24 ms
0.28 ms avg

Query Continuously

Query the trained network at any coordinate — including points never in the original dataset. Get interpolated, extrapolated, or derivative values in sub-millisecond time.

REST · gRPC · Python SDK
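The payoff of Step 3 is that resolution becomes a query-time choice. In the sketch below, a closed-form smooth field stands in for a trained network (its coefficients are invented for illustration, not real weather data): the same representation answers a point query at a never-stored coordinate and renders grids at arbitrary density.

```python
import numpy as np

# `f` is a stand-in for a trained continuous representation; the
# coefficients are invented for illustration, not real weather data.
def f(lat, lon):
    return 18.0 + 0.5 * np.sin(3.0 * lat) * np.cos(2.0 * lon)

# A point query at a coordinate that was never stored:
v = f(37.7742, -122.4194)

# The same field rendered at two different resolutions on demand:
def grid(n):
    lat = np.linspace(37.7, 37.8, n)
    lon = np.linspace(-122.5, -122.4, n)
    return f(*np.meshgrid(lat, lon))

coarse, fine = grid(8), grid(512)
print(v, coarse.shape, fine.shape)
```

Note that neither grid is stored anywhere; both are materialized from the same compact function only when requested.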

Built on SIREN Architecture

Aedus uses sinusoidal activation functions (SIREN) as its foundational architecture — enabling the network to learn high-frequency details and continuous derivatives of any dataset. Unlike ReLU networks, SIREN can represent fine-grained structure at any resolution.

Sinusoidal activations
Positional encoding
Multi-scale learning
Differentiable queries
// SIREN forward pass
Input:   x̂ = (x, y, t)         · positional encoding
Layer 1: h₁ = sin(W₁x̂ + b₁)    · sinusoidal activation
Layer 2: h₂ = sin(W₂h₁ + b₂)   · high-frequency learning
Output:  f(x̂) ∈ ℝⁿ             · continuous values
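The forward pass above can be written out concretely. This is a minimal sketch following the initialization from the SIREN paper (ω₀ = 30, first layer uniform in ±1/n, hidden layers uniform in ±√(6/n)/ω₀); the layer sizes are arbitrary, not Aedus's tuned architecture.

```python
import numpy as np

rng = np.random.default_rng(42)
OMEGA_0 = 30.0  # frequency scale from the SIREN paper

def init_layer(n_in, n_out, first=False):
    """SIREN init: first layer U(±1/n_in), hidden U(±sqrt(6/n_in)/omega_0)."""
    bound = 1.0 / n_in if first else np.sqrt(6.0 / n_in) / OMEGA_0
    return rng.uniform(-bound, bound, (n_out, n_in)), rng.uniform(-bound, bound, n_out)

def siren_forward(x, layers):
    """Apply sin(omega_0 * (W h + b)) through hidden layers, linear output."""
    h = x
    for W, b in layers[:-1]:
        h = np.sin(OMEGA_0 * (h @ W.T + b))
    W, b = layers[-1]
    return h @ W.T + b

# (x, y, t) coordinates in [-1, 1] -> n output channels, as in the diagram.
layers = [init_layer(3, 64, first=True), init_layer(64, 64), init_layer(64, 2)]
coords = np.array([[0.1, -0.3, 0.5], [0.0, 0.0, 0.0]])
out = siren_forward(coords, layers)
print(out.shape)  # (2, 2)
```

Because every operation is smooth, this forward pass is differentiable end to end, which is what the derivative queries elsewhere on this page rely on.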

Aedus vs Traditional Storage

Benchmark: 1 year of global weather sensor data (3.2 billion readings)

Storage Size
Traditional: 3.2 TB (100%) · Aedus: 38 GB (1.2%)

Query Latency (p99)
Traditional: 420 ms (100%) · Aedus: 17 ms (4%)

Training Data Transfer
Traditional: 3.2 TB ETL (100%) · Aedus: direct gradients (3%)

Resolution Flexibility
Traditional: fixed schema (15%) · Aedus: continuous (100%)
100×
Average Compression
vs raw tabular storage
<1ms
Query Latency
any coordinate, any resolution
∞
Output Resolution
continuous function, not discrete rows
99.7%
Reconstruction Accuracy
on benchmark datasets
Use Cases

Built for Data-Intensive Science

Any domain with continuous, high-volume data can benefit from neural representation — here are the verticals we're targeting first.

🌍
Geospatial

Planetary-Scale Mapping

Replace multi-terabyte GIS datasets with INR models that store entire planet surfaces at arbitrary resolution. Query elevation, temperature, and satellite imagery at any lat/lon without tile pyramids.

200×
Size reduction
Infinite
Zoom levels
🧬
Genomics

Continuous Genomic Data

Encode protein structures, epigenomic signals, and variant databases as neural functions. Enable gradient-based biological queries impossible with SQL — find nearest neighbors in embedding space directly.

150×
Compression
Differentiable
Query type
🌡️
Climate Science

Weather & Climate Reanalysis

Compress decades of global weather reanalysis (ERA5, MERRA-2) into compact INR models. Query any atmospheric variable at any pressure level, time, and location — even between grid points.

0.8%
ERA5 size
99.7%
Accuracy
🎬
Media & Streaming

Neural Video Compression

Encode video streams as spatiotemporal INR models. Stream any frame, crop, or timestamp on-demand with no keyframe dependencies — enabling truly random-access video at a fraction of H.265 storage.

12× smaller
vs H.265
0ms
Seek time
📡
IoT & Sensors

Sensor Network Telemetry

Replace time-series databases like InfluxDB for IoT workloads. Train INRs on rolling windows of sensor data, enabling compressed, queryable, interpolated telemetry from millions of devices.

85× less
Storage vs InfluxDB
Native
Interpolation
⚛️
Scientific Simulation

Physics & CFD Surrogates

Compress fluid dynamics, molecular dynamics, and finite-element simulations into neural surrogates. Query simulation state at any timestep or spatial point without re-running costly solvers.

500×
vs raw sim data
Real-time
Inference
Deep Technology

The Science Behind Aedus INR

Built on a decade of neural representation research — NeRF, SIREN, Instant-NGP — and extended to general-purpose database workloads.

Full Stack Architecture

1 · Data Sources: CSV / Parquet · HDF5 / NetCDF · SQL databases · Object storage
2 · Aedus Ingestion Layer: Coordinate normalization · Value scaling · Patch decomposition · Metadata schema
3 · INR Training Engine: SIREN / Hash-Grid · Auto-architecture search · Distributed GPU training · Convergence monitoring
4 · Neural Database: Compressed NN weights · Version control · Index structures · Replication
5 · Query Interface: REST / gRPC API · Python / JS SDK · SQL-like syntax · Streaming support

Core Innovations

Sinusoidal Positional Encoding

SIREN

Unlike ReLU activations that lose high-frequency detail, sinusoidal activations (SIREN) preserve fine structure at all scales — critical for scientific and geospatial data.


Hash-Grid Encoding

Instant-NGP

Multi-resolution hash tables accelerate training by 100× vs vanilla MLP approaches, enabling Aedus to fit large datasets in minutes rather than hours.
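A rough sketch of the hash-grid idea (2-D case): each resolution level hashes the integer grid corners around a query point into a small table of feature vectors, blends them bilinearly, and concatenates across levels. The hash primes follow the Instant-NGP paper; table size, level count, and feature width here are toy values, not Aedus's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
TABLE_SIZE = 2 ** 14          # toy value; real tables are larger
N_LEVELS, FEAT_DIM = 4, 2     # toy values
tables = rng.normal(0.0, 1e-2, (N_LEVELS, TABLE_SIZE, FEAT_DIM))

def corner_hash(i, j):
    """Instant-NGP spatial hash of an integer grid corner (2-D primes)."""
    return ((i * 1) ^ (j * 2654435761)) % TABLE_SIZE

def encode(x, y):
    """Encode (x, y) in [0, 1]^2 as concatenated multi-level features."""
    feats = []
    for level in range(N_LEVELS):
        res = 16 * 2 ** level            # grid resolution doubles per level
        gx, gy = x * res, y * res
        i0, j0 = int(gx), int(gy)
        fx, fy = gx - i0, gy - j0        # bilinear weights inside the cell
        f = np.zeros(FEAT_DIM)
        for di, wx in ((0, 1.0 - fx), (1, fx)):
            for dj, wy in ((0, 1.0 - fy), (1, fy)):
                f += wx * wy * tables[level, corner_hash(i0 + di, j0 + dj)]
        feats.append(f)
    return np.concatenate(feats)         # length N_LEVELS * FEAT_DIM

print(encode(0.37, 0.62).shape)  # (8,)
```

In training, the table entries themselves are the learned parameters, and a much smaller MLP sits on top of this encoding, which is where the 100× training speedup comes from.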

Differentiable Queries

Auto-diff

Every query is differentiable by construction. Compute gradients, Hessians, or integrals analytically — enabling physics-informed queries impossible in discrete databases.
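For a sinusoidal representation the derivative has a closed form, which is what makes derivative queries cheap. The weights below are random placeholders, not a trained model; the check compares the analytic gradient against a central finite difference.

```python
import numpy as np

rng = np.random.default_rng(1)

# f(x)  = sum_i w_i * sin(omega_i * x + b_i)
# f'(x) = sum_i w_i * omega_i * cos(omega_i * x + b_i)   (exact, by construction)
w = rng.normal(size=32)             # random placeholder weights
omega = rng.uniform(1.0, 10.0, 32)
b = rng.uniform(0.0, 2 * np.pi, 32)

def f(x):
    return float(np.sum(w * np.sin(omega * x + b)))

def df(x):
    """Analytic derivative query: no finite differencing required."""
    return float(np.sum(w * omega * np.cos(omega * x + b)))

x0, eps = 0.3, 1e-6
fd = (f(x0 + eps) - f(x0 - eps)) / (2 * eps)  # numerical check
print(abs(df(x0) - fd))  # agreement up to finite-difference error
```

A discrete table can only approximate this derivative from neighboring rows; here it falls out of the representation itself.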

GPU-Native Storage

CUDA

INR weights live natively in GPU memory, and inference runs on the same device as training, eliminating CPU↔GPU transfer overhead in ML pipelines.

Research Foundation

ECCV 2020
NeRF
Representing Scenes as Neural Radiance Fields
NeurIPS 2020
SIREN
Implicit Neural Representations with Periodic Activations
SIGGRAPH 2022
Instant-NGP
Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
TMLR 2022
COIN++
Neural Compression Across Modalities
Early Access — Limited Spots

Ready to Compress
Your Data Universe?

Join the waitlist for early access to Aedus. We're onboarding select teams in climate science, genomics, and geospatial analytics first.

No credit card required
Free during beta
Cancel anytime

“We were storing 40 years of ERA5 reanalysis across a 12-node cluster. Aedus beta compressed it to a single NVMe card with better query performance than our Dask cluster. I genuinely couldn't believe it.”

Dr. R. Chen · Computational Climate Lab, Berkeley