
Artificial Intelligence • Biotechnology • Non-profit
Future of Life Institute (FLI) is a non-profit organization dedicated to steering transformative technologies, such as artificial intelligence and biotechnology, towards benefiting life and minimizing large-scale risks. The Institute focuses on cause areas like artificial intelligence, biotechnology, and nuclear weapons, advocating for policy changes and public awareness to mitigate associated risks. FLI engages in policy advocacy, outreach, grantmaking, and hosting events to discuss safe development and governance of these technologies. It collaborates internationally with entities like the United Nations and the European Union to ensure that powerful technologies are harnessed for positive future outcomes.
11 - 50 employees
Founded 2014
🤖 Artificial Intelligence
🧬 Biotechnology
🤝 Non-profit
💰 $482.5k Grant in 2021-11
June 6

• Develop techniques for discovering threat models and generating risk pathway analyses that capture societal and sociotechnical dimensions
• Model multi-node risk transformation, amplification, and threshold effects propagating through social systems (a toy sketch follows this list)
• Contribute to the design of robust technical governance frameworks and assessment methodologies for catastrophic risks, including loss-of-control scenarios
• Provide strategic and tactical quality control for the team's research, ensuring conceptual soundness and technical accuracy
• Drive or take ownership of original research projects on comprehensive risk management for advanced AI systems, aligned with the team's objectives
• Collaborate across CARMA teams to integrate risk assessment paradigms with other workstreams
• Contribute to technical standards and best practices for the evaluation, risk measurement, and risk thresholding of AI systems
• Craft persuasive communications for key stakeholders on prospective AI risk management
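The multi-node modeling responsibility is easiest to picture with a toy example. The sketch below is illustrative only and assumes nothing about CARMA's actual methods: every node name, edge weight, and threshold is hypothetical. It shows the qualitative behavior the bullet describes: risk propagating through a directed graph of societal nodes, with each node passing amplified risk downstream only after its own level crosses a threshold, so a small change in input can gate an entire cascade on or off.

```python
# Illustrative only: node names, weights, and thresholds are invented,
# not taken from FLI/CARMA materials.
EDGES = {  # source -> [(target, amplification_weight), ...]
    "model_release": [("misuse", 0.6), ("labor_shock", 0.3)],
    "misuse": [("institutional_trust", 0.8)],
    "labor_shock": [("institutional_trust", 0.5)],
    "institutional_trust": [],
}
# A node transmits risk downstream only once its own risk crosses this level.
THRESHOLDS = {"model_release": 0.0, "misuse": 0.4,
              "labor_shock": 0.5, "institutional_trust": 0.7}

def propagate(initial: dict[str, float], steps: int = 10) -> dict[str, float]:
    """Iterate risk levels over the graph until they settle."""
    risk = {node: initial.get(node, 0.0) for node in EDGES}
    for _ in range(steps):
        incoming = {node: 0.0 for node in EDGES}
        for src, targets in EDGES.items():
            if risk[src] >= THRESHOLDS[src]:  # threshold effect gates the cascade
                for dst, weight in targets:
                    incoming[dst] += weight * risk[src]  # per-edge amplification
        # Risk is monotone in this toy model: each node keeps the worst level seen.
        risk = {n: max(risk[n], min(1.0, incoming[n])) for n in EDGES}
    return risk

if __name__ == "__main__":
    print(propagate({"model_release": 0.9}))
```

Running this, the cascade reaches misuse (0.54) and labor_shock (0.27) but stalls at institutional_trust, which settles at 0.432 and never crosses its 0.7 threshold; nudging the release risk or edge weights upward can flip that node and the whole downstream picture, which is the kind of nonlinear threshold behavior the role would characterize.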
• 5+ years of experience in AI safety, alignment, and/or governance; we are open to candidates at different levels of seniority who can demonstrate the required depth of expertise
• Strong understanding of multiple risk modeling approaches (causal modeling, Bayesian networks, systems dynamics, etc.); a toy Bayesian-network example follows this list
• Experience with systemic and sociotechnical modeling of risk propagation
• Excellent analytical thinking, with the ability to identify subtle flaws in complex arguments
• Strong written and verbal communication skills for technical and non-technical audiences
• Publication record or equivalent demonstrated expertise in relevant areas
• Systems thinking approach with independent intellectual rigor
• Track record of constructive collaboration in fast-paced, intellectually demanding environments
• Comfort with uncertainty and rapidly evolving knowledge landscapes
• Background in complex systems theory, control theory, cybernetics, multi-scale modeling, or dynamical systems
• Work history at AI safety research organizations, technical AI labs, policy institutions, or adjacent risk domains
• Experience with quality assurance processes for technical research
• Ability to model threshold effects, nonlinear dynamics, and emergent properties in sociotechnical systems
• Understanding of international dynamics and power differentials in AI development
• Ability to balance consideration of both acute and aggregate AI risks
• Experience with causal, Bayesian, or semi-quantitative hypergraphs for risk analysis
• Demonstrated methodical yet creative approach to framework development
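For the Bayesian-network qualification, here is a minimal worked example of the kind of diagnostic query such models answer. Everything in it is hypothetical rather than drawn from the posting: three invented binary variables with hand-picked probabilities, and inference done by brute-force enumeration instead of a library so the sketch stays self-contained.

```python
from itertools import product

# Hypothetical three-node network (not from the posting):
#   C = inadequate safety culture, E = evaluation gap, I = deployment incident
# Structure: C -> E, C -> I, E -> I. All probabilities are invented.
P_C = {True: 0.2, False: 0.8}                      # P(C)
P_E = {True: {True: 0.7, False: 0.3},              # P(E | C): outer key is C
       False: {True: 0.1, False: 0.9}}
P_I = {(True, True): 0.6, (True, False): 0.2,      # P(I=True | C, E)
       (False, True): 0.3, (False, False): 0.05}

def joint(c: bool, e: bool, i: bool) -> float:
    """Joint probability factorized along the network structure."""
    p_i_true = P_I[(c, e)]
    return P_C[c] * P_E[c][e] * (p_i_true if i else 1.0 - p_i_true)

# Diagnostic query by exhaustive enumeration: P(C=True | I=True).
numerator = sum(joint(True, e, True) for e in (True, False))
evidence = sum(joint(c, e, True) for c, e in product((True, False), repeat=2))
print(f"P(inadequate culture | incident) = {numerator / evidence:.3f}")  # ~0.615
```

A real risk model would have far more nodes and elicited or learned probabilities, but the factorize-and-enumerate pattern here is the same structure that dedicated inference libraries scale up.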
• Plus good benefits for U.S. employees
May 9
Senior Risk Engineer acting as a liaison for safety and risk mitigation at an entrepreneur-led firm delivering comprehensive insurance solutions.
April 24
201 - 500 employees
Join SAFE Security as a Risk Advisor, empowering teams towards a safer digital future and effective risk management.
April 18
This role assists the Placement Executive with marketing and training in insurance risk management.
April 14
Lead client experience strategies at an award-winning insurance brokerage firm to enhance services and retention.
April 11
Genesis Consulting seeks a Data Governance Lead to support federal data management initiatives.