- AI Risk Management Framework
- GPT-4 Takes a New Midterm and Gets an A - by Bryan Caplan
- Skeptical optimists are the key to an abundant AI future
- Our approach to AI safety
- Edge AI Just Got Faster
- Koala: A Dialogue Model for Academic Research – The Berkeley Artificial Intelligence Research Blog
- LLMSurvey (up-to-date LLM papers archive on GitHub)
- Against LLM Reductionism (models learn algorithms)
- Reflecting on Reflexion (GPT-4 improvement via reflection after failure)
- The case for how and why AI might kill us all
- Llama.cpp 30B runs with only 6GB of RAM now (github.com/ggerganov)
- Humanness in the Age of AI
- Robots that learn from videos of human activities and simulated interactions (Meta)
- AI Safety: Technology vs Species Threats
- Europol warns ChatGPT is being used to commit crime
- If AI scaling is to be shut down, let it be for a coherent reason
- How should AI systems behave, and who should decide?
- Our approach to alignment research
- Task-driven Autonomous Agent Utilizing GPT-4, Pinecone, and LangChain for Diverse Applications (GPT-4 author)
- HALTT4LLM - Hallucination Trivia Test for Large Language Models (GitHub)
- What We Still Don’t Know About How A.I. Is Trained
- ColossalChat: An Open-Source Solution for Cloning ChatGPT With a Complete RLHF Pipeline
- Pause Giant AI Experiments: An Open Letter
- Existential risk, AI, and the inevitable turn in human history
- GPT4: The quiet parts and the state of ML
- ChatGPT plugins
- Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models
- OpenAI's Researcher Access Program
- Prompt Engineering Tutorial
- How to... use AI to unstick yourself - by Ethan Mollick
- I am worried that we will not be able to contain AI for much longer.
- GPT-4 Is Exciting and Scary
- OpenAI announces team for GPT-4 contributions
- Join the GPT-4 API Waitlist for Developers
- OpenAI unveils GPT-4, a large multimodal language model
- Microsoft lays off team that taught employees how to make AI tools responsibly
- ChatGPT: Automatic expensive BS at scale
- Microsoft introduces powerful VM series for generative AI
- Alpaca: A Strong Open-Source Instruction-Following Model
- Against AGI Timelines
- Ezra Klein: "One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies"
- The Inside Story of OpenAI's Surprise Hit, ChatGPT
- The Long Singularity: Prepare for a Weirder Future
- Unleashing Text-to-Image Diffusion Models for Visual Perception
- The reasons I think AI timelines are short are pretty simple: GPT-4 is doing something as complex as what the human brain is doing in terms of information processing. This suggests we have enough compute for AGI; we just don't yet know how to shape it into a fully general agent.
- AGI in sight: our look at the game board
- One of the biggest points of disagreement I have with the x-risk fearmongers is their assertion that AGI will be hostile to humanity, or at best indifferent. Here’s my take:
- We are as far today from arbitrary matter replication and grey goo as from AGI.
- But, all things considered with regard to AGI existential angst, I would prefer to be alive now to witness AGI than to have been alive in the past and not.
- Blake Lemoine: 'I Worked on Google's AI. My Fears Are Coming True'
- Planning for AGI and beyond
- John Oliver on new AI programs: ‘The potential and the peril here are huge’
- The thing is, if you stuck to the original definition, you'd have to admit that "alignment" is a sci-fi narrative device rather than an actual AI safety concern. By extending it to mean "any improvement whatever", you can say you're doing alignment & that alignment is very important.
- We’re All Gonna Die with Eliezer Yudkowsky
- I’ve yet to see on Twitter one specific, concrete argument about why AI alignment is hard. Only vague hand-waving, paper clips, and appeals to authority. Can someone point me to a thread where someone outlines a scary, concrete AI non-alignment scenario?
- Despite deceptive alignment (an AI intentionally acting aligned while actually being misaligned) not existing yet (we hope)… it seems to me there’s no reason to think this will never become possible as models get more intelligent and knowledgeable about the world.
- AI alignment is tricky. Why? Let me explain with an experiment involving digital creatures that were supposed to jump but learned to "cheat" and circumvent the intentions of their creator. A fun story with serious lessons for AI, machine learning, and optimization.
- It may be worth thinking not just about AI alignment, but AI disalignment: ensuring AIs do not align with each other.
- We tend to view AI ethics & alignment as about making sure AI doesn't do terrible things. Bing (accidentally?) takes ethics further. The AI refuses to let you do dumb things. Like it won't create a business plan for my drone-based guacamole delivery idea & tries to distract me.