Get state-of-the-art model performance from your data

Introducing Dark Matter, an embedding model that creates richer representations of your data to improve the performance of any model, in any domain. Feeding your model learned, statistically optimal features for your task improves performance metrics, reduces engineering time, and frees your team to focus on experimentation.

Introducing Dark Matter, a new step in the data science pipeline

Dark Matter uses a new objective function that takes snapshots of the loss landscape and learns how to construct statistically optimal features from those snapshots. These features are encoded into an embedding that presents new information to your model, effectively distilling signal from noise and making relationships easier for the model to learn.

Not only does this improve model metrics like accuracy, precision, and F1 score across the board, it also means less time spent on data and feature engineering, freeing your team to focus on experimentation and applications.

Check out our Research and Documentation for more technical details.

Lowering barriers to effective machine learning

Dark Matter lowers the barrier to entry for data scientists to achieve state-of-the-art model performance. Our algorithm provides superior results with limited or sparse data, simpler models, and less compute, opening the door to new modeling capabilities and applications.

Better Predictions

Gain a competitive advantage by capturing complex, non-linear relationships in your data that are otherwise invisible to your model.

Faster Iteration

Streamline model training and iterate faster by giving your model a dense, machine-friendly representation of only what matters for your prediction.

Lower Costs

Cut compute costs and time, freeing up your team’s bandwidth and boosting efficiency with one simple integration.

Slots in Seamlessly

Surprisingly lightweight, Dark Matter represents a transformative new step in the data science pipeline that doesn’t alter your existing processes.

Domain and Model Agnostic

We make any model in any domain better simply by creating richer representations of the relationships in your existing data.

Secure Integration

Integration is available on-premises or via cloud API. Retain total control of your pipeline, keeping the privacy and integrity of your data intact.

Securely installs in under 5 minutes

Dark Matter is a surprisingly lightweight solution that slots seamlessly into your data science pipeline with just a few lines of code. Integrate it on-premises for total privacy, or run it via cloud API for rapid scalability.

				
# Import
import ensemblecore as ec

# Authentication
user = ec.User()
user.login(username='USERNAME', password='PASSWORD', token='TOKEN')
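Once authenticated, the integration pattern is to enrich your feature matrix before training. Below is a minimal sketch of that shape using only numpy: the `embed` function is a hand-rolled stand-in for the learned embedder, not the documented ensemblecore API, and illustrates only the flow of the step: raw features in, a wider representation out, downstream model unchanged.

```python
import numpy as np

# Hypothetical stand-in for the learned embedder. Dark Matter's features
# are learned from the loss landscape; this toy version just appends
# fixed nonlinear transforms to illustrate the pipeline shape.
def embed(X):
    return np.hstack([X, X ** 2, np.sin(X)])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))    # raw feature matrix
X_rich = embed(X)                # enriched representation for training

assert X_rich.shape == (200, 9)  # original 3 columns plus 6 new ones
```

Your existing model then trains on `X_rich` in place of `X`; nothing else in the pipeline changes.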

Improving predictive power across applications

Dark Matter boosts the productivity of any ML pipeline, reducing training compute and preserving valuable resources. It works regardless of industry, model type, or prediction task, even with limited data.

Example Applications

Forecasting

Price predictions
Supply and demand
Customer churn

Recommendations

Ad placement
Content suggestions
Product personalization

Optimized Training

Reduces compute
Train on limited / sparse data

Ready for better model performance?

Get in touch to learn more and book a demo.

Frequently Asked Questions

Is my data secure?

Dark Matter is available for on-premises installation on your own machines, using your own compute resources, so you retain complete control over your proprietary data. With on-prem deployment, we never see your data.

Can I try Dark Matter before committing?

We encourage you to try Dark Matter with your own data and model and compare the results against your existing pipeline. Most customers start in a testing environment with sample data to minimize resource requirements before moving to production. If you’d like to set up a trial, please fill out the form here and we will be in touch.

How is Dark Matter different from synthetic data?

While Dark Matter does create new variables, its mechanics are fundamentally different. Traditional synthetic data recreates existing distributions from Gaussian noise, so no new information is created. This has the virtue of anonymizing data (essential in some regulated industries), but it has minimal impact on predictive accuracy because it merely mirrors the statistical properties of your data.

In contrast, Dark Matter learns how to create embeddings that have different statistical properties and distributions. Using our new machine learning algorithm, it’s able to converge on nearly orthogonal features that measurably improve predictive accuracy.
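The difference is easy to see in a toy numpy sketch (an illustration of the principle, not of the Dark Matter algorithm): a feature resampled from the same fitted Gaussian mirrors the original distribution but carries no signal about the target, while a feature with different statistical properties can capture the nonlinear relationship.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=5000)
y = x ** 2 + 0.1 * rng.normal(size=5000)  # nonlinear target

# Synthetic-data style: a fresh draw from the same fitted Gaussian.
# Same distribution as x, but statistically independent of y.
synthetic = rng.normal(x.mean(), x.std(), size=5000)

# A feature with different statistical properties (here, chi-squared-like)
# that exposes the relationship the raw feature hides.
enriched = x ** 2

def corr(a, b):
    return abs(np.corrcoef(a, b)[0, 1])

print(corr(synthetic, y))  # near 0: mirrors the distribution, adds no signal
print(corr(enriched, y))   # near 1: new information a model can use
```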

How much data do I need for Dark Matter to help?

One of the primary benefits of Dark Matter is that it lowers the barrier to useful predictive performance by creating richer representations of your data. That said, there is a minimum threshold of data quality and volume below which it can't help: if what you're working with is mostly noise, Dark Matter probably won't improve it. Our rule of thumb is that if you have a working data science pipeline generating mediocre predictions, Dark Matter can improve its performance.

Backed by:

Salesforce Ventures
Motivate Venture Capital
M13
AMPLO

Research

Dive into our cutting-edge research.

Feature Enhancement: A New Approach for Representation Learning (Whitepaper)

Discover a novel approach to representing complex, non-linear relationships inherent in real-world data.

Feature Programming for Multivariate Time Series Prediction (ICML)

Learn about a new framework for automated feature engineering from noisy time series data.

Resources

Blog
Op-eds and thoughts on the state of machine learning and AI
Documentation
Developer support, API docs, quick-start guide
Published Research
Ensemble research, papers, and conferences

Join the Waitlist

Early Access Form