Consistency Models
Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation.
Extracting Concepts from GPT-4
Using new techniques for scaling sparse autoencoders, we automatically identified 16 million patterns in GPT-4’s computations.
Introducing Whisper
We’ve trained and are open-sourcing a neural net called Whisper that approaches human-level robustness and accuracy on English speech recognition.
DALL·E 2 pre-training mitigations
To share the magic of DALL·E 2 with a broad audience, we needed to reduce the risks associated with powerful image generation models. To this end, we put various guardrails in place to prevent generated images from violating our content policy.
Techniques for training large neural networks
Large neural networks are at the core of many recent advances in AI, but training them is a difficult engineering and research challenge that requires orchestrating a cluster of GPUs to perform a single synchronized calculation.
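As a toy illustration of the kind of synchronized calculation this refers to (not code from the post): in data-parallel training, every worker computes a gradient on its own shard of the batch, and an all-reduce averages those gradients so all workers apply the same update. The sketch below fakes the all-reduce with a NumPy mean; real systems use a collective library such as NCCL or MPI.

```python
import numpy as np

def all_reduce_mean(grads):
    """Average per-worker gradients, standing in for a synchronized all-reduce."""
    return np.mean(grads, axis=0)

# Four hypothetical workers, each with a gradient from its own data shard.
worker_grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0]),
                np.array([5.0, 6.0]), np.array([7.0, 8.0])]

avg = all_reduce_mean(worker_grads)  # every worker receives the same result
print(avg)  # [4. 5.]
```

The synchronization requirement comes from this step: no worker can apply its update until all workers' gradients have been combined, which is why a single slow GPU stalls the whole cluster.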
Learning to play Minecraft with Video PreTraining
We trained a neural network to play Minecraft with Video PreTraining (VPT) on a massive unlabeled video dataset of human Minecraft play, using only a small amount of labeled contractor data. With fine-tuning, our model can learn to craft diamond tools, a task that usually takes proficient humans over 20 minutes (24,000 actions).