Open Augments

Research & Data Strategy

Building open-source AI frameworks and educational tools that empower researchers, nonprofits, and public-interest organizations to do the most good


Enhance society's pursuit of the common good with responsible, human-centered AI

AI tools available to the public are becoming increasingly powerful -- but much of this infrastructure remains locked behind proprietary platforms, requires substantial technical expertise to deploy, and prioritizes automation and cost savings over careful human oversight.


We address this gap directly by building rigorous, transparent, and approachable tools that apply the cutting edge of AI to accelerate crucial public-interest work with humans always in the loop. We develop these tools alongside practitioners for their exact needs and contexts -- learning, building, and refining together -- before releasing what we’ve co-created as portable, open-source frameworks to transform practice across entire sectors.


We are a mission-driven organization, guided by the conviction that the infrastructure for responsible, AI-augmented work should be widely available to anyone who would use it to help others. With that in mind, we are committed to ensuring that every tool is honed in close collaboration with practitioner experts, every framework is open-source, and every educational resource is freely shared.

Open Source, Always

The infrastructure to harness AI responsibly and effectively should be a public good for the benefit of all, not a competitive advantage.

Augmentation, Not Replacement

We build tools that amplify and enhance what skilled and caring individuals can do. Every framework we design keeps expert humans central to the strategy, oversight, and decision-making where it counts.

Rigor & Transparency First

AI-empowered work must always be accountable to the people it serves. Our non-negotiables: AI outputs must be auditable and reproducible, and every limitation must be honestly acknowledged.

Practitioner-First, Then Open to All

Every Open Augments project follows the same path -- from embedded collaboration to open-source infrastructure:

1. Pilot

We embed alongside expert practitioners to support how they actually work -- co-developing AI-powered tools and workflows that help them enhance and expand their services.

2. Refine

We iterate rapidly with practitioners until the framework is battle-tested, rigorous, and ready to scale -- in weeks, not months.

3. Share

We release every framework as open-source infrastructure, transforming individual pilots into portable standards for entire practice communities.

4. Guide

We provide hands-on training, documentation, and direct support so practitioners more broadly can adopt these frameworks responsibly and effectively in their own contexts.

DAAF, the Data Analyst Augmentation Framework

A force-multiplying exoskeleton for human researchers

The Data Analyst Augmentation Framework is an open-source workflow for skilled researchers that rapidly accelerates quantitative data analysis while keeping human expertise core to the research process. Built on Claude Code and released under LGPL-3.0, DAAF allows data professionals to harness the latest advances in agentic AI coding workflows while maintaining rigor, reproducibility, and transparency every step of the way. Designed to be hyper-extensible and readily applicable to any data domain, the framework is freely available to every researcher and always will be. DAAF embodies our approach in action: born from real research practice, battle-tested through daily use, and now in active dissemination to a growing community of data professionals.

240+ Unique Users
<10 Minutes to Install
40+ Included Datasets
5-10x Acceleration

Peer review is dead; long live peer review!

Six steps towards building a more optimistic AI-empowered future for academia and science, together

Brian Heseung Kim

Founder & Chief Data Scientist

Open Augments LLC

Brian Heseung Kim is a data scientist, educator, and education policy researcher now specializing in open-source AI infrastructure for the public good. He holds a Ph.D. in Education Policy from the University of Virginia, with a focus on quantitative methods and education data science. He previously served as Director of Data Science, Research, and Analytics at The Common Application, where he led research initiatives, core data infrastructure, and AI investments for the nation's largest college application data resource.

Brian’s research on college admissions -- including pioneering the careful and robust application of LLM-related tools to educational data as early as 2019 -- has been published in journals like Educational Researcher, American Educational Research Journal, and Education Finance and Policy, and covered by outlets like the Wall Street Journal, New York Times, Brookings Institution, and Bloomberg. His work has been generously supported by the NAEd/Spencer Dissertation Fellowship, Ascendium Foundation, Carnegie Corporation of New York, Fidelity Foundation, Institute of Education Sciences, and the Gates Foundation.

Full bio, CV, and research portfolio →