Vercel releases its first v0 front-end generation AI model, currently in beta testing.
The newly launched model is named v0-1.0-md, which Vercel says is designed specifically for building modern web applications. It is multimodal, accepting both text and image inputs, and offers a 128,000-token context window with a 32,000-token output limit. Pricing is $3 per million input tokens and $15 per million output tokens. The model includes features such as "auto-fix" for common coding errors and "quick edit", and can stream inline changes in real time during code generation.

Crucially, v0-1.0-md exposes an OpenAI-compatible API, which means you can plug it into existing tools (such as Cursor or Codex) or custom applications, as long as they already speak the OpenAI API format, including Vercel's own AI SDK. It also supports function and tool calling and provides low-latency streaming responses. Developers can try the new model in the Vercel AI Playground to see how it handles different prompts.

Access to the v0 API (and the v0-1.0-md model) is currently in beta. You need a Premium or Team plan on Vercel with usage-based billing enabled. First obtain an API key from v0.dev, then send requests to the POST api.v0.dev/v1/chat/completions endpoint, authenticating with a bearer token. The daily message limit is approximately 200, and the context size limit matches the advertised figures; Vercel notes that if you reach these limits, you can apply for a higher quota. If you want to dig into the details or learn how to set everything up, the official v0 documentation on the Vercel website contains everything you need, including examples. ...
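Since the endpoint is OpenAI-compatible and authenticated with a bearer token, a request can be built with nothing but the Python standard library. The sketch below assembles such a request for the v0-1.0-md model; the `V0_API_KEY` environment-variable name and the example prompt are illustrative assumptions, not official conventions, and the actual network call is left commented out since it requires a paid plan and a valid key.

```python
import json
import os
import urllib.request

# Endpoint and model name as stated in the article.
V0_API_URL = "https://api.v0.dev/v1/chat/completions"

def build_v0_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completions request for v0-1.0-md."""
    body = {
        "model": "v0-1.0-md",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # set True to use the low-latency streaming responses
    }
    return urllib.request.Request(
        V0_API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # bearer-token authentication
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    # V0_API_KEY is an assumed variable name for the key obtained from v0.dev.
    key = os.environ.get("V0_API_KEY", "")
    req = build_v0_request(key, "Build a pricing page in Next.js")
    print(req.full_url)
    # urllib.request.urlopen(req)  # uncomment to actually send the request
```

Because the request shape is the standard OpenAI chat-completions format, the same payload should also work through any OpenAI-compatible client by pointing its base URL at api.v0.dev/v1.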
PC Version: https://www.cnbeta.com.tw/articles/soft/1501612.htm
via cnBeta.COM Chinese Industry News Station - Telegram Channel