GPT-4 Technical Report https://cdn.openai.com/papers/gpt-4.pdf

OpenAI Charter https://openai.com/charter

GPT-4 team breakdown (tweet by Emad Mostaque) https://twitter.com/emostaque/status/1646056127883513857

Existential risk, AI, and the inevitable turn in human history https://marginalrevolution.com/marginalrevolution/2023/03/existential-risk-and-the-turn-in-human-history.html

Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367 https://youtu.be/L_Guz73e6fw

Our approach to AI safety https://openai.com/blog/our-approach-to-ai-safety

Our approach to alignment research https://openai.com/blog/our-approach-to-alignment-research

Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment https://youtu.be/Yf1o0TQzry8

Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to

Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371 https://youtu.be/VcVfceTsD0A

The alignment problem from a deep learning perspective https://www.lesswrong.com/posts/KbyRPCAsWv5GtfrbG/the-alignment-problem-from-a-deep-learning-perspective

Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor Says https://www.huffpost.com/entry/artificial-intelligence-oxford_n_5689858

Instrumental convergence https://arbital.com/p/instrumental_convergence/

We must slow down the race to God-like AI https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2

AI Safety: Technology vs Species Threats https://blog.eladgil.com/p/ai-safety-technology-vs-species-threats

My AI Safety Lecture for UT Effective Altruism https://scottaaronson.blog/?p=6823

A Primer on AI Doom For People Who Don’t (Yet) Wear Fedoras https://every.to/chain-of-thought/a-primer-on-ai-doom-for-people-who-don-t-yet-wear-fedoras

AGI Ruin: A List of Lethalities https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368 https://youtu.be/AaTRHFaaPG8

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality https://youtu.be/41SUp-TRVlg