
Artificial Intelligence • Biotechnology • Non-profit
Future of Life Institute (FLI) is a non-profit organization dedicated to steering transformative technologies, such as artificial intelligence and biotechnology, towards benefiting life and minimizing large-scale risks. The Institute focuses on cause areas like artificial intelligence, biotechnology, and nuclear weapons, advocating for policy changes and public awareness to mitigate associated risks. FLI engages in policy advocacy, outreach, grantmaking, and hosting events to discuss safe development and governance of these technologies. It collaborates internationally with entities like the United Nations and the European Union to ensure that powerful technologies are harnessed for positive future outcomes.
11 - 50 employees
Founded 2014
🤖 Artificial Intelligence
🧬 Biotechnology
🤝 Non-profit
💰 $482.5k Grant on 2021-11
June 9
🇺🇸 United States – Remote
💵 $125k - $200k / year
⏳ Contract/Temporary
🟠 Senior
🤖 Artificial Intelligence

• Develop quantitative system dynamics models capturing the interrelationships between technological, social, and institutional factors that influence AI risk landscapes
• Design detailed analytical models and simulations to identify critical leverage points where policy interventions could shift offense-defense balances toward safer outcomes
• Expand and operationalize our current offense/defense dynamics taxonomy and nascent framework, developing metrics and models to predict whether specific AI system features favor offensive or defensive applications
• Build empirically-informed analytical frameworks using documented cases of AI misuse and beneficial deployed uses to validate theoretical models
• Research how specific technical characteristics (capabilities breadth/depth, accessibility, adaptability, etc.) interact with sociotechnical contexts to determine offense-defense balances
• Build public understanding of offense-defense dynamics through blog posts, articles, conference talks, and media engagement
• Create tools and methodologies to assess new AI models upon release for their likely offense-defense implications
• Draft evidence-based guidance for AI governance that accounts for complex interdependencies between technological capabilities and deployment contexts
• Translate research findings into actionable guidance for key stakeholders including policymakers, AI developers, security professionals, and standards organizations
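The system-dynamics modeling named in the first bullet could be sketched, purely as a hypothetical illustration, as a pair of coupled stocks for offensive and defensive capability. Everything here (the variable names, the logistic growth term, the suppression coefficient) is an assumption chosen for illustration and is not the role's actual framework:

```python
def simulate(steps=100, dt=0.1, accessibility=0.5, defense_investment=0.3):
    """Toy two-stock offense-defense model, Euler-integrated.

    Illustrative assumptions only: offensive capability grows
    logistically with how accessible the capability is, and is
    partially suppressed by defensive capability, which in turn
    grows with sustained investment.
    """
    offense, defense = 0.1, 0.1
    for _ in range(steps):
        # Offense: logistic growth driven by accessibility, minus a
        # suppression term proportional to current defensive capability.
        d_off = accessibility * offense * (1 - offense) - 0.2 * defense * offense
        # Defense: saturating growth toward full capability, paced by investment.
        d_def = defense_investment * (1 - defense)
        offense += dt * d_off
        defense += dt * d_def
    return offense, defense
```

Sweeping a parameter such as `accessibility` or `defense_investment` over a grid is one way such a model could surface the "critical leverage points" the posting mentions, i.e. regions where a small policy-driven change flips the equilibrium balance.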
• An M.Sc. or higher in Computer Science, Cybersecurity, Criminology, Security Studies, AI Policy, Risk Management, or a related field
• Demonstrated experience with complex systems modeling, risk assessment methodologies, or security analysis
• Strong understanding of dual-use technologies and the factors that influence whether capabilities favor offensive or defensive applications
• Deep understanding of modern AI systems, including large language models, multimodal models, and autonomous agents, with the ability to analyze their technical architectures and capability profiles
• Experience in any of the following: security mindset, security studies research, cybersecurity, safety engineering, AI governance, operational risk management, systems dynamics modeling, network theory, complexity science, adversarial analysis, or technical standards development
• Ability to develop both qualitative frameworks and quantitative models that capture sociotechnical interactions, and comfort creating semi-quantitative, semi-empirical models grounded in logic
• Record of relevant publications or research contributions related to technology risk, governance, or security
• Exceptional analytical thinking with the ability to identify non-obvious path dependencies and feedback loops in complex systems
• Plus good benefits for U.S. employees
June 5
Collaborate on language projects with DATAmundi.ai as a freelance expert from the US.
May 7
Alignerr seeks an AI Trainer to educate AI models using PhD-level expertise, ensuring accuracy and relevance.
🇺🇸 United States – Remote
💵 $30 - $150 / hour
⏳ Contract/Temporary
🟡 Mid-level
🟠 Senior
🤖 Artificial Intelligence
March 19
As an AI Trainer at Alignerr, enhance AI models in Energy and Power with your expertise. Shape the future of AI while enjoying flexible hours.
🇺🇸 United States – Remote
💵 $15 - $60 / hour
⏳ Contract/Temporary
🟡 Mid-level
🟠 Senior
🤖 Artificial Intelligence
March 19
As an AI Trainer, shape the future of AI in Quantum Mechanics with Alignerr's innovative approaches. Leverage your expertise and contribute to AI advancements.
🇺🇸 United States – Remote
💵 $15 - $150 / hour
⏳ Contract/Temporary
🟡 Mid-level
🟠 Senior
🤖 Artificial Intelligence
March 19
Help develop AI models for the legal landscape at Alignerr by refining AI understanding of legal concepts.
🇺🇸 United States – Remote
💵 $15 - $60 / hour
⏳ Contract/Temporary
🟡 Mid-level
🟠 Senior
🤖 Artificial Intelligence