Created
Aug 22, 2023 6:50 PM
Tags
Product Tips, Product Insights
WIP
- LLMs made it possible for applications to understand human language
    - Pre-LLM
        - Interpret the user's intention through UI components
        - Applications need complex UI/UX to capture the user's intention and perform the corresponding functionality
        - ex: Miro has a complex three-level toolbar for creating content on its whiteboard; users have to choose a visual type before they can create anything.
        - Onboarding (teaching users the complex UX) can be a big business (data support here), ex: Intercom, Userpilot
    - Post-LLM
        - Interpret the user's intention through natural language
        - This is more intuitive for humans (no need to learn the UI)
        - Complex UX can be simplified to a chat interface
        - ex: MyMap cleaned up all the UI components for content creation and left only a chat input box; users just type a prompt describing what they want to create, and the application creates it automatically.
        - Tolerance for ambiguity in the user's intention
            - When the user's intention is ambiguous, generative AI is good at making a best guess. For example, in MyMap you can type the prompt "help me to study physics?" without specifying which visual type you want, and the LLM automatically configures the visual type best suited to the need (see the sketch after this list).
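A minimal sketch of how this prompt-to-intent interpretation could work, assuming an OpenAI-style chat completions endpoint; the `VisualType` values, the `CreateIntent` shape, and `interpretPrompt` are hypothetical names for illustration, not MyMap's actual implementation:

```typescript
// Hypothetical sketch: map a free-form prompt to a structured "create content"
// intent, replacing the visual-type picker with a single chat input.

type VisualType = "mind_map" | "flowchart" | "timeline" | "table";

interface CreateIntent {
  visualType: VisualType; // best-guess visual, chosen by the model
  topic: string;          // what the content should be about
}

const SYSTEM_PROMPT = `Turn the user's request into JSON:
{"visualType": "mind_map" | "flowchart" | "timeline" | "table", "topic": string}
If the user does not name a visual type, pick the one best suited to the topic.
Respond with JSON only.`;

async function interpretPrompt(userPrompt: string): Promise<CreateIntent> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      response_format: { type: "json_object" },
      messages: [
        { role: "system", content: SYSTEM_PROMPT },
        { role: "user", content: userPrompt },
      ],
    }),
  });
  const data = await res.json();
  // The model's structured reply stands in for the multi-level toolbar:
  // the user never picks a visual type by hand.
  return JSON.parse(data.choices[0].message.content) as CreateIntent;
}

// An ambiguous prompt still resolves to a concrete, renderable intent:
// interpretPrompt("help me to study physics").then(console.log);
// e.g. { visualType: "mind_map", topic: "physics" }
```

The design point is that ambiguity gets handled inside the model's best guess rather than pushed back onto the user as another menu.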