Comments:
That's crazy 🔥🔥🔥
Please wake me up 😢
Generative UI 🎉 hearing about it for the first time here
Please start by learning machine learning fundamentals before popping out a bunch of regurgitated AI models, for which the goal of solving any problem at all in products has become a fairy tale. This solves nothing more than playing with tools that did the legwork already and want you to think you're doing something complicated and brag-worthy.
What about code privacy? How do you scale here?
Does this mean the SDK returns the React component, like Weather or StockPrice, directly to the client? Or does it just return the supporting data for you to create your own Weather or StockPrice components?
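The two readings in the question above can be sketched in plain TypeScript. This is illustrative only, not the SDK's actual API: "option A" is the server streaming an already-rendered component (roughly what server-side generative UI does), "option B" is the server sending data plus a component name that the client resolves from its own registry. All names here (`UIMessage`, `renderFromData`, the registry entries) are made up for the sketch.

```typescript
// Illustrative sketch of "option B": the server returns supportive data,
// and the client owns the components and picks one to render.
// (In "option A", by contrast, the server would stream the rendered
// component itself and the client would just display it.)

type WeatherData = { city: string; tempC: number };

// The server's message: which component to show, and its props.
type UIMessage = { component: "Weather" | "StockPrice"; props: unknown };

// Client-side registry: stand-ins for real React components,
// rendering plain strings so the sketch stays self-contained.
const registry: Record<string, (props: any) => string> = {
  Weather: (p: WeatherData) => `${p.city}: ${p.tempC}°C`,
};

function renderFromData(msg: UIMessage): string {
  const Component = registry[msg.component];
  return Component(msg.props);
}

console.log(renderFromData({ component: "Weather", props: { city: "Oslo", tempC: 7 } }));
// → "Oslo: 7°C"
```

The practical difference: in option B the client must ship every possible component up front; in option A the server decides what exists, at the cost of moving rendering logic to the backend.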
It's technically cool, but can someone explain the use case? If I have a service or site, what is the benefit of having LLM-generated components? Is it separation of concerns with dynamic display? (i.e., you could have thousands of possible components depending on the context, a level of dynamism not practical natively?)
I'm new to all this, so I hope to hear from the wise old-timers.
This video is great!
How can the AI responses be saved to the backend? Do you just save the content as a string, and presumably it will be formatted as React code so that when the user returns it can be fetched and rendered as before?
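One plausible answer to the persistence question, sketched under an assumption: rather than saving rendered React code as a string, you save the structured message (role, kind, and the tool's data output) and re-render the component from that data when the user returns. Every type and field name below (`SavedMessage`, `kind`, `tool`) is hypothetical, not an actual SDK schema.

```typescript
// Hypothetical persisted shape: either plain assistant text,
// or a tool result carrying the data a component needs.
type SavedMessage =
  | { role: "assistant"; kind: "text"; content: string }
  | { role: "assistant"; kind: "tool"; tool: "Weather"; data: { city: string; tempC: number } };

// Saving: serialize the structured message to the backend.
const savedObj: SavedMessage = {
  role: "assistant",
  kind: "tool",
  tool: "Weather",
  data: { city: "Berlin", tempC: 12 },
};
const saved = JSON.stringify(savedObj);

// On revisit: parse and route back to the right component.
const msg = JSON.parse(saved) as SavedMessage;
const rendered =
  msg.kind === "text" ? msg.content : `<Weather city=${msg.data.city} />`; // stand-in render
console.log(rendered);
```

Storing data instead of markup keeps the saved history valid even if the component's implementation changes later.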
Just with how Vercel is currently implementing it, it is more just a framework for calling LLMs. The idea of picking a widget to display is nothing new pre-LLM. I guess the idea of streaming the UI is new, but if I need to define the UI component beforehands, I don't see a reason to move to NextJS and SSR just for that. Presumably it will bring some build and run time optimizations since you won't have all of the components on client side.. but the trade-off of mixing too much frontend logic into backend really won't work for large applications.
Also, unless the LLM is generating the UI itself, I really cannot see the difference between asking the backend to return the UI vs. returning the data representing the state of the UI. I also doubt an LLM will be capable of generating interactive UI.
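The commenter's equivalence can be made concrete with a small sketch: for static output, "the UI" and "data describing the UI" are just two serializations of the same thing. The tree shape and type names below are invented for illustration.

```typescript
// A toy UI tree, standing in for whatever the backend would stream.
type UINode = { tag: string; children: (UINode | string)[] };

// Path 1, "return the UI": the backend ships a ready-made tree.
const uiFromServer: UINode = {
  tag: "div",
  children: [{ tag: "span", children: ["AAPL: $190"] }],
};

// Path 2, "return the state": the backend ships data and the
// client applies its own mapping from state to tree.
type StockState = { symbol: string; price: number };

function buildStockUI(state: StockState): UINode {
  return {
    tag: "div",
    children: [{ tag: "span", children: [`${state.symbol}: $${state.price}`] }],
  };
}

const uiFromState = buildStockUI({ symbol: "AAPL", price: 190 });

// For static content the two paths yield identical trees; the real
// difference is where the state-to-UI mapping lives, and only
// client-side code can attach event handlers for interactivity.
console.log(JSON.stringify(uiFromServer) === JSON.stringify(uiFromState));
```

Which supports the point: the payoff of server-returned UI has to come from something other than the payload itself, e.g. not shipping every component to the client.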