
RESEARCH
Human-Aligned Evaluation Systems for Algorithmic & AI-Driven Environments
Independent research infrastructure for the algorithmic age
The Mandate
“AI and large-scale systems should work in favor of human development. That is the mandate.”
We envision a world where the most advanced technologies are certified as human-designed — organic, natural integrations into the rhythm of daily life that enhance human capacity rather than extract from it. Technology that earns the right to shape human experience through rigorous, independent evaluation.
The Problem
Human neurological systems evolved over hundreds of thousands of years in low-information, high-embodiment environments. In roughly 15 years, algorithmic content delivery has introduced stimulus patterns that exploit evolutionary vulnerabilities at a speed and scale that outpace both individual adaptation and institutional response.
Social and romantic interaction increasingly mediated by screens.
Primary relational bonds forming with algorithmic content over embodied human relationships.
Anxiety, depression, and attention disorders rising sharply in high-exposure demographics.
Increased reliance on psychoactive substances correlating with digital environment exposure.
Content environments introducing belief systems that conflict with users' lived reality.
Traditional institutions losing relevance as algorithmic environments become primary context for identity.
Engagement optimization directly competing with physical activity and real-world participation.
Reduced capacity for nuanced articulation and complex reasoning in heavy-consumption populations.
“You need to demonstrate that we can handle it if it is being built.”
Rob Bensinger
What We Build
From foundational research to commercial certification to long-term intervention — a full-stack approach to human-aligned evaluation infrastructure.
A structured, peer-reviewable methodology for evaluating the human-alignment properties of algorithmic systems.
Translating foundational research into adoptable compliance products for governments, platforms, and institutions.
Designing systems that go beyond measurement to actively improve outcomes at population scale.
Readables
Foundational research artifacts. Each readable leads to a dedicated page with full content and downloadable PDF assets.
The Approach
Define a taxonomy of human-alignment dimensions: mental health, relational quality, developmental trajectory, autonomy preservation, cognitive diversity (sketched in code after this list).
Build labeled evaluation datasets from human expert assessments of algorithmic content feeds.
Train classifiers to score content samples against the taxonomy at scale.
Construct composite indices from classifier outputs: the Behavioral Risk Index (second sketch below).
Validate against physiological and self-reported wellbeing outcomes.
Iterate and open-source the evaluation methodology for independent verification.
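To make steps 1–3 concrete, here is a minimal sketch in Python. The dimension names come from the taxonomy above; the `score_with_llm` helper is a hypothetical stand-in for whatever classifier is trained on the labeled dataset, and everything else is illustrative rather than the production pipeline.

```python
from dataclasses import dataclass
from enum import Enum

class Dimension(Enum):
    """Human-alignment taxonomy (step 1)."""
    MENTAL_HEALTH = "mental_health"
    RELATIONAL_QUALITY = "relational_quality"
    DEVELOPMENTAL_TRAJECTORY = "developmental_trajectory"
    AUTONOMY_PRESERVATION = "autonomy_preservation"
    COGNITIVE_DIVERSITY = "cognitive_diversity"

@dataclass
class LabeledSample:
    """One expert-labeled feed item (step 2): content plus a 0-1 risk score per dimension."""
    content: str
    labels: dict[Dimension, float]

def score_with_llm(content: str, dim: Dimension) -> float:
    """Hypothetical classifier call (step 3): stand-in for a model trained on LabeledSample data."""
    raise NotImplementedError("plug in the trained classifier here")

def score_content(content: str) -> dict[Dimension, float]:
    """Score one content sample against every taxonomy dimension."""
    return {dim: score_with_llm(content, dim) for dim in Dimension}
```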
This pipeline is tractable today using existing LLM infrastructure, multi-modal analysis capabilities, and behavioral science methodologies. The contribution is in the evaluation framework design, the labeled datasets, and the validation methodology.
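Continuing the sketch above, steps 4–5 reduce per-dimension scores to a single index and check it against ground truth. Equal dimension weights and a Pearson correlation against self-reported wellbeing are illustrative assumptions here, not choices prescribed by the methodology.

```python
from statistics import correlation  # Python 3.10+

# Illustrative equal weights over the Dimension enum defined above;
# real weights would be fit and justified during validation.
WEIGHTS = {dim: 1 / len(Dimension) for dim in Dimension}

def behavioral_risk_index(scores: dict[Dimension, float]) -> float:
    """Step 4: collapse per-dimension scores into one composite index in [0, 1]."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in Dimension)

def validate(bri_values: list[float], wellbeing: list[float]) -> float:
    """Step 5: correlate the index with self-reported wellbeing outcomes.

    A strongly negative correlation (higher risk, lower wellbeing) is
    evidence that the index tracks something real.
    """
    return correlation(bri_values, wellbeing)
```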
Playground
Full-stack research applications and visualizers. Each experience ships with its own landing page, explainer, and source code — demonstrating our evaluation methodology in practice.

Analyze any social media feed sample and receive a structured Behavioral Risk Index scorecard.
/playground/feed-auditor
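Not the tool's actual output, but a hedged sketch of the scorecard shape, reusing `Dimension`, `score_content`, and `behavioral_risk_index` from the sketches in The Approach:

```python
def audit_feed(feed_items: list[str]) -> dict[str, float]:
    """Hypothetical scorecard: mean per-dimension risk plus the composite index."""
    n = max(len(feed_items), 1)
    totals = {dim: 0.0 for dim in Dimension}
    for item in feed_items:
        for dim, value in score_content(item).items():
            totals[dim] += value
    means = {dim: totals[dim] / n for dim in Dimension}
    scorecard = {dim.value: means[dim] for dim in Dimension}
    scorecard["behavioral_risk_index"] = behavioral_risk_index(means)
    return scorecard
```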
Interactive visualization of the Behavioral Risk Index dimensions and scoring methodology.
/playground/bri-explorer
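The explorer itself is interactive; as a rough stand-in, a minimal matplotlib sketch of one way to view the dimensions, with made-up scores:

```python
import matplotlib.pyplot as plt

# Made-up scores for illustration; the explorer visualizes real classifier output.
example_scores = {
    "mental_health": 0.62,
    "relational_quality": 0.48,
    "developmental_trajectory": 0.55,
    "autonomy_preservation": 0.71,
    "cognitive_diversity": 0.33,
}

fig, ax = plt.subplots()
ax.barh(list(example_scores), list(example_scores.values()))
ax.set_xlim(0, 1)
ax.set_xlabel("Risk score (0 = aligned, 1 = high risk)")
ax.set_title("Behavioral Risk Index dimensions")
plt.tight_layout()
plt.show()
```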
Run synthetic user profiles through algorithmic systems to project behavioral outcomes over time.
/playground/impact-simulator
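A minimal sketch of the simulator's core loop, assuming a toy linear exposure-response model; the `SyntheticUser` fields, coefficients, and model form are all illustrative assumptions, not the production simulator.

```python
from dataclasses import dataclass

@dataclass
class SyntheticUser:
    """Illustrative profile: daily exposure and a baseline wellbeing level (0-1)."""
    daily_exposure_hours: float
    baseline_wellbeing: float

def project_outcomes(user: SyntheticUser, feed_bri: float, days: int) -> list[float]:
    """Project wellbeing day by day under a toy linear exposure-response model.

    Each day wellbeing drifts down in proportion to exposure x feed risk and
    partially recovers toward baseline. Both coefficients are illustrative.
    """
    wellbeing = user.baseline_wellbeing
    trajectory = []
    for _ in range(days):
        drift = 0.01 * user.daily_exposure_hours * feed_bri
        recovery = 0.05 * (user.baseline_wellbeing - wellbeing)
        wellbeing = min(max(wellbeing - drift + recovery, 0.0), 1.0)
        trajectory.append(wellbeing)
    return trajectory

# Example: heavy exposure to a high-risk feed, projected over 90 days.
print(project_outcomes(SyntheticUser(5.0, 0.8), feed_bri=0.7, days=90)[-1])
```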
The Team
“Wisdom” — Arabic
A unique combination of technical depth, ecosystem fluency, and personal conviction. Fijian heritage, family-oriented worldview. Has observed firsthand the gaps and divides that socioeconomic, religious, cultural, and geographic differences create in technology access and impact.
The commonality across all these differences: we're all human. That's the foundation.
AI Safety: Redwood Research
Community: Cerebral Valley, AI Collective
Industry: Extropic, Applied AI Startups
Ecosystem: EA Community, Bay Area Native
Seeking 5–7 advisors spanning policy, AI safety, behavioral science, and platform operations, to provide credibility, network access, and domain expertise.
Long-Term Vision
The name is intentional. The Golden Gate is a threshold — you cross it and something is verified, certified, aligned.
To become the independent standard-setting body for human-alignment evaluation across all algorithmic and AI systems that interact with human cognition and behavior at scale.
