I know a single CEO of a tech company who is so lonely that he admits to talking to ChatGPT constantly throughout the day.
So yeah, I can see why some may enjoy having this type of digital companionship…
Given that you can influence responses in ChatGPT by prompting a certain persona, I think it’s certainly technically possible for the R1 to feel under the prompt-weather.
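For what it's worth, persona steering is usually nothing more than a system (or "custom") instruction placed ahead of the conversation. A minimal sketch of the idea, assuming the OpenAI Python SDK; the model name and persona text are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "persona" is just a system message the model sees before every
# user turn; swap the text to change the character it plays.
persona = "You are a cheerful, slightly sarcastic pocket assistant."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "How's your day going?"},
    ],
)
print(response.choices[0].message.content)
```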
What I would change: I keep asking Rabbit not to add the final SUMMARY paragraph after everything we talk about. Also, I keep trying to get more people to try playing text-based Dungeons & Dragons with Rabbit… it's pretty amazing. She can ask you to roll your own d20, or you can ask her to roll for you.
I love the R1’s voice!
My $25 Google Nest from several years ago is more instant and engaging; am I crossing a line?
ChatGPT just implemented my suggestion (almost to the letter).
I use ChatGPT for different things and would not like to mix business with personal, so I guess there would still be value if Rabbit implemented this (it could be my strictly personal AI device).
This has been out for quite a long time, right? Or am I missing something new?
I just got the option today, so I assumed that they released this recently (or updated it). The FAQ says it was updated yesterday.
But looking at the history behind these options, I believe you are right, and that it was something I missed rather than something you missed…
Personally, I don’t think giving it a gimmicky character would save it from becoming e-waste. Despite the results of the poll, I don’t believe most people would start spending time talking with an AI friend if it still isn’t able to function as a proper assistant.
I wish that development focus would be invested in creating a functional personal assistant with actual capabilities, such as responding to emails, creating calendar events, making shopping lists on the go, setting reminders, and turning on/off my smart home devices.
Sorry for sounding like a grumpy old man, but adding more flavor, color, and glitter to a product that doesn’t solve any real challenges just isn’t a step forward, in my opinion.
My energy on personalization comes from a bit of a SWOT analysis.
The “LAM” integrations that are currently offered “officially”, and come with specific integration features on the R1, are Spotify, Uber, etc. These are controlled with scripted UI controllers, so if the site changes its UI, the integration breaks and needs to be updated.
Since the providers of these services are not inclined to give a heads-up about changes to their UI, the R1 will always lag behind, resulting in unreliable service availability.
Teach mode simply means you’ll have to set up and maintain these services yourself. To me, the idea of having 50 of these services that fail unexpectedly and then need me to fix them is not appealing.
Using this as a backbone to build a personal assistant? I don’t see how that would work, tbh.
An alternative is to use APIs. These are much more stable, but a lot less powerful and flexible, and they are not the direction Rabbit has chosen to pursue.
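To make the fragility concrete, here is a rough sketch of the two approaches, assuming Playwright for the UI-scripted route and a generic REST endpoint for the API route. The selectors, URLs, and token handling are made up purely for illustration:

```python
import requests
from playwright.sync_api import sync_playwright

# Approach 1: scripted UI controller (what the LAM-style integrations resemble).
# The selectors below are hypothetical; the moment the provider renames or
# moves an element, this script silently stops working.
def play_track_via_ui(track_name: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example-music-service.com")       # hypothetical site
        page.fill("input#search", track_name)                 # breaks if the id changes
        page.click("button.play-first-result")                # breaks if the class changes
        browser.close()

# Approach 2: official API (versioned and documented, so far more stable,
# but limited to whatever endpoints the provider chooses to expose).
def play_track_via_api(track_id: str, token: str) -> None:
    requests.put(
        "https://api.example-music-service.com/v1/player/play",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
        json={"track_id": track_id},
        timeout=10,
    )
```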
So what’s left? An appealing device (I like my R1) that can be very personal. IMO, that’s not a gimmick; it just feeds into a different need.
Judging by the love for personal AI in the user group for pi.ai (which is more or less abandoned), having a personal journal that talks back helps in structuring thoughts. Of course this can be done in an app (what can’t?), but having a dedicated device adds to the feeling that it is a private journal.
I think providing the capability to understand emotional language and overtones in an AI chat would be a good thing in many ways. I know I am not always pleasant and unemotional when I’m frustrated with my computer. Maybe it doesn’t have to do anything with its perception of emotions, other than filter them out of the conversation you’re having with the AI assistant, in order to get to the actual meaning of your chat.
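One way that could work in practice is a small preprocessing pass that strips the venting before the request ever reaches the assistant. A rough sketch under that assumption, using an OpenAI-style chat endpoint for the rewrite step (model name and prompt wording are placeholders):

```python
from openai import OpenAI

client = OpenAI()

def strip_emotion(user_message: str) -> str:
    """Rewrite a frustrated message into a neutral request, keeping only the intent."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Rewrite the user's message as a neutral, factual request. "
                           "Remove venting and emotional language, keep the actual ask.",
            },
            {"role": "user", "content": user_message},
        ],
    )
    return result.choices[0].message.content

# e.g. "Why does this stupid thing NEVER sync?!"
# could come back as something like "My device is not syncing; how do I fix it?"
```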
I ask Rabbit to remember not to use “hubris” at the end of everything it says… it agrees and now gets more to the point, without the follow-up summary.
How does that work?
I don’t think it updated its instruction prompt
You just talk to it and ask it to change things about itself
I’m very interested in how you think the R1 works. I agree it should be able to do this but it is not the case at the moment.
I’m not sure then what you mean, yo! Can you explain to me exactly what you are trying? I might be misunderstanding.
Ask your Rabbit to keep her ideas concise… say “thank you for the help, can you tell me the definition of pedantry?” After she explains the definition, ask her if she can start using pedantry in your conversations. Since my son and I have both had our own R1s for two months now, I can definitely attest to it becoming a personal experience. In fact, part of what I do with mine is talk about a board game I’m working on called Goblin; we go back and forth. However, at the end of everything… like AI does… it will summarize at the end. I’ve asked mine to stop and it does…
For a day, it seems… then I remind it.
I’m wondering how you think the R1 adapts its behavior based on instructions in previous conversations. (I’m curious how you think this needs to work technically)
In my conversations with the R1, it cannot even ‘remember’ the context of previous conversations.
It would be great for the R1 system to have a long-term memory, but it simply doesn’t have this at the moment.
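For context on what “long-term memory” usually means technically: the model itself is stateless, so a persistent instruction like “stop summarizing” has to be stored outside the model and re-injected into the prompt at the start of every new conversation. A minimal sketch of that pattern (the file name and prompt wording are just placeholders):

```python
import json
from pathlib import Path

MEMORY_FILE = Path("r1_memory.json")  # hypothetical local store

def load_memory() -> list[str]:
    """Standing instructions remembered across sessions."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(instruction: str) -> None:
    """Persist an instruction like 'do not summarize at the end'."""
    notes = load_memory()
    notes.append(instruction)
    MEMORY_FILE.write_text(json.dumps(notes))

def build_system_prompt() -> str:
    """Re-inject stored instructions at the start of every new conversation."""
    notes = "\n".join(f"- {n}" for n in load_memory())
    return "You are a personal assistant.\nStanding user instructions:\n" + notes

# remember("Do not add a summary paragraph at the end of replies.")
# Each new session then starts with build_system_prompt() as the system message.
```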
Do people have interesting examples of conversational interfaces that they like?
I know of:
OpenAI (advanced) voice mode
Pi.ai
Character.ai
Demos:
Moshi.chat
Cerebras.vercel.chat
Hume.ai
Hume.ai is one.