Open Projects

Discover active AI projects from labs and companies. Submit a research proposal to initiate a verified collaboration.

For Labs

Labs on the Pro tier ($299/mo) can post up to 3 active briefs concurrently and use Scout Search.


OpenAI Research

Alignment Researchers for Sparse Autoencoders

The Problem

We are aggressively expanding our interpretability team. We need researchers capable of training SAEs on frontier models to map concept activations accurately. The goal is to isolate safety-relevant behavior primitives and steer them directly at the activation level.

Who we need

Demonstrated experience scaling unsupervised feature extraction on high-dimensional activations. PyTorch mastery is required.

Interpretability · AI Safety · LLMs
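As a rough illustration of the workload, a minimal sparse autoencoder over cached activations can be sketched in a few lines of PyTorch. The sizes and random inputs here are toy stand-ins, not frontier-model dimensions, and the L1 coefficient is an arbitrary placeholder:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: an overcomplete dictionary over model activations.

    A real run would train on residual-stream activations cached from
    the model under study; here x is random data for illustration.
    """
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        # ReLU keeps feature activations non-negative, so the L1
        # penalty below can drive most of them to exactly zero.
        f = torch.relu(self.encoder(x))
        return self.decoder(f), f

def sae_loss(x, x_hat, f, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty that induces sparsity.
    recon = (x - x_hat).pow(2).mean()
    sparsity = f.abs().sum(dim=-1).mean()
    return recon + l1_coeff * sparsity

# Toy batch standing in for cached activations.
sae = SparseAutoencoder(d_model=64, d_hidden=512)
x = torch.randn(8, 64)
x_hat, f = sae(x)
loss = sae_loss(x, x_hat, f)
```

The overcomplete hidden layer (here 8x wider than the input) is what lets individual features specialize to individual concepts.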

Anthropic

Mechanistic Interpretability of Claude 3

The Problem

Seeking to reverse-engineer high-level capability structures inside Claude's latest multimodal weights. Must be able to manipulate activation geometries and build probes that can both read and write concept clusters directly in activation space.

Who we need

Researchers who have published at NeurIPS/ICLR on SAEs or mechanistic interpretability. Deep linear algebra intuition required.

Interpretability · Math
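The read/write probing described above can be sketched with synthetic data: a difference-of-means probe "reads" a concept direction from activations, and adding that direction back "writes" it (activation steering). Everything here is a hypothetical stand-in; real probes are fit on cached activations from the model:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # hypothetical activation width

# Synthetic stand-in for cached activations: a latent "concept"
# direction shifts the mean of activations when the concept is present.
concept = rng.normal(size=d)
concept /= np.linalg.norm(concept)
pos = rng.normal(size=(200, d)) + 2.0 * concept  # concept present
neg = rng.normal(size=(200, d))                  # concept absent

# "Read": a difference-of-means probe recovers the concept direction.
probe = pos.mean(axis=0) - neg.mean(axis=0)
probe /= np.linalg.norm(probe)

def read_concept(x, probe):
    # Scalar projection of an activation along the probe direction.
    return float(x @ probe)

def write_concept(x, probe, alpha=2.0):
    # Steer an activation by adding the probe direction.
    return x + alpha * probe

x = rng.normal(size=d)
steered = write_concept(x, probe)
```

More sophisticated probes (logistic regression, SAE features) replace the difference-of-means step, but the read/write geometry is the same.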

Hugging Face

Optimized Multi-Modal Pipelines for Accelerate

The Problem

We want to unify large, heterogeneous vision and text data pipelines under the standard Accelerate/Transformers ecosystem without increasing VRAM usage. We need a 10x throughput increase for edge deployments.

Who we need

Systems optimization experts. CUDA, Triton, and intimate familiarity with PyTorch internals and Distributed Data Parallel.

Systems Engineering · Open Source · Vision

Scale AI

DPO & RLHF Mass-pipeline Synthesis

The Problem

Building the next generation of scalable RLHF techniques that natively handle multi-agent deliberation trajectories. Direct Preference Optimization needs to scale gracefully to billions of synthetic pairs.

Who we need

Reinforcement Learning PhDs with experience managing massive distributed training clusters. Focus on PPO optimization.

RLHF · Synthetic Data
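The DPO objective the brief refers to is compact enough to sketch directly: given log-probabilities of chosen and rejected responses under the policy and a frozen reference model, the loss is the negative log-sigmoid of the scaled log-ratio difference. The toy numbers below are illustrative only:

```python
import numpy as np

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss on log-probabilities of (chosen, rejected) pairs.

    pi_* are log-probs under the policy, ref_* under the frozen
    reference model; all are arrays of shape (batch,).
    """
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # -log(sigmoid(logits)), written via logaddexp for numerical stability.
    return np.logaddexp(0.0, -logits).mean()

# Toy check: when the policy already prefers the chosen response
# (relative to the reference), the loss is lower than when it prefers
# the rejected one.
low = dpo_loss(np.array([-1.0]), np.array([-5.0]),
               np.array([-2.0]), np.array([-2.0]))
high = dpo_loss(np.array([-5.0]), np.array([-1.0]),
                np.array([-2.0]), np.array([-2.0]))
```

Scaling this to billions of synthetic pairs, as the brief describes, is a data-engineering problem layered on top of this per-batch computation.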