VAIL was founded in 2025 to address the need for transparency and verification of AI systems.
VAIL is built by a diverse team of entrepreneurs, engineers, and researchers united by a shared belief that AI systems should be transparent, verifiable, and trustworthy. We combine deep expertise in data science, machine learning, cryptography, and systems engineering to create the tools and infrastructure necessary for safe and secure AI systems.
**Manish Shah**, Co-founder & CEO
Manish is a repeat co-founder and technology enthusiast who has been building companies for 20+ years, with a passion for products and systems that help people live better lives. He was previously a co-founder of LiveRamp ($RAMP), a data platform company, and PeerWell, an AI-powered digital health service. Manish holds a degree in Computer Science from UC Berkeley.
**Jonah Leshin**, Co-founder & CTO
Jonah is a repeat co-founder and technical leader. He previously led machine learning and product teams at Datavant focused on privacy-preserving technology for healthcare data. Jonah is a published author in medical informatics and theoretical mathematics, and holds mathematics degrees from Northwestern University, the University of Cambridge (Master's), and Brown University (PhD).
**Joel Jewitt**, Chief Business Officer
Joel has been a technology operator and executive at cutting-edge companies for several decades. He was on the founding teams of Palm Computing, Good Technology, and LiveRamp. Joel holds a degree in Physics from Amherst College and an MBA from Stanford University.
**Daniel Kang**, Founding Technical Advisor
Daniel is an assistant professor in the computer science department at UIUC. He was a postdoc in the Sky lab at UC Berkeley, working with Ion Stoica, and completed his PhD at Stanford working with Peter Bailis and Matei Zaharia. Daniel also helped develop DawnBench and MLPerf. His research focuses on verifiable computation for machine learning, adversarial robustness, and ML benchmarking. Daniel holds a degree in Computer Science from MIT, a Master's from the University of Cambridge, and a PhD from Stanford University.
**Ian Timmis**, ML Platform Engineer
Ian has researched and developed deep learning methods for natural language processing and computer vision, with applications including pose recognition, motion conformance, and range-of-motion measurement. He has deployed models to production in environments ranging from data centers to mobile devices. Ian holds a Master's in Computer Science from Lawrence Technological University.
The principles that guide our work and define how we approach the challenge of making AI transparent and trustworthy.
Tools for implementing verifiable AI should be open source. Model metadata that attests to authenticity, safety, and performance on various benchmarks should be open. Verifiable AI tools should also be compatible with models and training methods that are not open source due to competitive or security concerns.
Assurances related to model training and execution should be technologically demonstrable, for example via cryptographic guarantees and hardware-level attestation. While contractual guarantees of model usage are critical for building trust, they are not sufficient on their own, given the complexity of AI systems and the associated potential for misunderstanding.
AI developers should be able to publish proofs of their own model training and usage, and verify the proofs of others.
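As a minimal illustrative sketch of this publish-and-verify idea (the function names and metadata fields below are hypothetical, and a production system would add real digital signatures and hardware-level attestation rather than relying on a bare content hash), model metadata can be cryptographically bound to a specific set of weights so that anyone can check the claim:

```python
import hashlib

def publish_attestation(weights: bytes, metadata: dict) -> dict:
    """Bind model metadata to specific weights via a SHA-256 content hash.

    Illustrative only: a real attestation would also carry a digital
    signature over this record and, ideally, a hardware attestation quote.
    """
    record = dict(metadata)
    record["weights_sha256"] = hashlib.sha256(weights).hexdigest()
    return record

def verify_attestation(weights: bytes, record: dict) -> bool:
    """Recompute the hash to confirm the record refers to these exact weights."""
    return hashlib.sha256(weights).hexdigest() == record.get("weights_sha256")

# Hypothetical example: a developer publishes a record, a third party verifies it.
weights = b"\x00\x01fake-model-weights"
card = publish_attestation(weights, {"model": "demo-v1", "benchmark": "toy-suite"})
assert verify_attestation(weights, card)            # untampered weights pass
assert not verify_attestation(weights + b"x", card)  # any modification fails
```

The design point is that verification requires no trust in the publisher: the check is a recomputation anyone can perform, which is the property the contractual-guarantee approach alone cannot provide.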
Verifiability tools should be easy to adopt for honest actors in terms of technical barrier to entry, computational cost, and compatibility with AI development. At the same time, these tools should be difficult for bad actors to abuse. Proofs and verifications, in addition to model metadata, should serve as a tool that the AI community, policy makers, and regulators can use to inform decisions related to AI development and governance.
There should be clear standards for the way information related to verifiability is generated, structured, and made available. These standards should meet the needs of both researchers and practitioners.
We're always looking for talented individuals who share our vision of transparent and verifiable AI. Whether you're a researcher, engineer, designer, or domain expert, there may be a place for you on our team.
Explore Opportunities