OpenAI has announced the general availability of its latest text generation model, GPT-4, through its API. Access is initially open to existing API developers with a history of successful payments, who can now build on the model's advanced capabilities.
Like its predecessors, GPT-4 was trained on publicly available data, and it brings a host of improvements and new capabilities. Among the most anticipated is its ability to work with images. This vision capability is not yet broadly available; OpenAI is testing it with a single partner, Be My Eyes, whose app assists blind and low-vision users, but it points toward richer, multimodal ways of interacting with AI.
OpenAI envisions a future in which chat-based models can support any use case, and GPT-4 brings us one step closer to that vision. The model can generate text, including source code, and is designed to accept both text and image inputs, setting it apart from its predecessor, GPT-3.5, which accepts only text.
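For developers who now have access, a basic text request is a single API call. The sketch below uses the openai Python package (the v0.x interface current at the time of this announcement); the API key and prompt are placeholders.

```python
# A minimal sketch of calling GPT-4 through the Chat Completions API,
# using the openai Python package (v0.x-era interface).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)

# The generated reply lives in the first choice's message content.
print(response["choices"][0]["message"]["content"])
```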
OpenAI also plans to let developers fine-tune GPT-4 and GPT-3.5 Turbo on their own data, a capability expected later in 2023. In addition, OpenAI has made its DALL-E 2 API for image generation and its Whisper API for speech-to-text conversion generally available.
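Both of the newly generally available APIs are reachable through the same Python package. The sketch below is illustrative; the file name and prompt are placeholders.

```python
# Calling the Whisper and DALL-E APIs with the v0.x openai package.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Speech-to-text with Whisper: transcribe a local audio file.
with open("meeting_recording.mp3", "rb") as audio_file:  # placeholder file
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript["text"])

# Image generation: request a single 1024x1024 image and print its URL.
image = openai.Image.create(
    prompt="a watercolor painting of a lighthouse at dawn",
    n=1,
    size="1024x1024",
)
print(image["data"][0]["url"])
```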
Alongside these releases, OpenAI plans to retire some of its older models, including the original GPT-3 and its derivatives, to optimize its compute capacity. Developers still using those models will need to manually update their integrations to newer replacements by January 4, 2024.
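In practice, updating an integration typically means swapping a legacy completions-style call for a chat-based one. The sketch below is one hypothetical example of that kind of change, not an official migration guide; replacement model choices will depend on the specific deprecated model in use.

```python
# A hedged before/after sketch of updating a legacy integration.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Before: a legacy completions-style call against an older GPT-3 model.
# legacy = openai.Completion.create(
#     model="davinci", prompt="Say hello.", max_tokens=20
# )

# After: an equivalent request against the chat-based gpt-3.5-turbo.
updated = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello."}],
    max_tokens=20,
)
print(updated["choices"][0]["message"]["content"])
```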
The release of GPT-4 marks a significant milestone in AI development. Millions of developers have requested access to the GPT-4 API since March, and the range of innovative products built on the model is growing every day.