Would anyone like their R1 to have more personality, like the many AI apps you can chat with when bored or lonely? To have a more intuitive and personal conversation with your R1...

  • [ YES ]
  • [ NO ]
0 voters
4 Likes

The primary goal of an AI assistant is to enhance productivity and simplify interactions with technology.

By remaining a functional assistant rather than a friend, the R1 can prevent users from developing unrealistic expectations about emotional support or companionship, which are not within the capabilities of AI technology.

6 Likes

I reckon it could get there, with the right prompts… :sunglasses:

2 Likes

Don’t forget to put no in the poll. I believe that’s what you were suggesting. :relieved: I see some people may take this question too seriously and emotionally. I’ll do better not to trigger individuals and get such an emotional response. Thank you.

1 Like

Haha, no way, that’s just crazy talk!

But seriously, if you’re feeling lonely, grab a coffee with a friend or give your mom a call—they’ll love it! :stuck_out_tongue_winking_eye:


On a more serious note, I genuinely believe a more conversational AI assistant could be a game-changer. Imagine having chats with an AI that feel like you’re talking to an old buddy! It could make interactions way more enjoyable and even help people feel more connected. :exploding_head:

And wouldn’t it be great if the AI could detect when you’re joking or being sarcastic? It would make the conversation feel way more natural.

So, what do you think? Am I onto something, or just talking to myself?

1 Like

Never said I personally was feeling lonely, but thanks for the advice. And I’d advise you not to make these posts so personal as to mention individuals’ family members or to assume things about their social lives. It’s fun to think we have all the answers, but it’s simply not so. Let’s all do better :smirk:

2 Likes

Hey @CREEPR, thanks for your feedback! I appreciate your point about keeping the discussion professional and avoiding personal assumptions. While making AI more conversational could enhance user engagement, it’s crucial to stay mindful of technology’s current limitations and avoid unrealistic expectations.

Let’s keep exploring ways to improve user experiences within these boundaries. What features do you think could enhance AI interactions without crossing those lines? Maybe integrating more contextual understanding or having customizable interaction styles could be a start. This way, users can tailor the AI experience to their preferences while keeping it grounded. What do you all think about these ideas?

1 Like

I’m sure your opinions are valid. I’ll definitely take all these functional options into consideration.

1 Like

They absolutely should add some way to have a fun chat with the r1. I understand it’s meant to be a companion and assist with various tasks, but it could use some more sarcasm/personality. My younger cousins were using it, and their main complaint was that it didn’t respond in a fun way to them. I’d suggest adding some sort of prompt to activate a more sarcastic, fun conversation.

3 Likes

Fun update on this:

Beta Rabbit is currently capable of far more personality, provided you prompt it as such. For example, you can ask it to be sarcastic :joy:

So that’s a step, but I think what you’re asking for is for more personality to be injected by default. And that’s definitely possible. The key is to try and find the “right” personality for it. Or the right “default” and a way for it to be flexible.

If you think about it, you can imagine that it’s fairly trivial to have r1 assume a base personality that, on the face of things, doesn’t seem controversial. For example, let’s say “light-hearted and optimistic”.

But then… Imagine you need some help from r1, because you have had a loss in the family. Or you’re asking about a violent event that occurred in the world. All of a sudden that personality is no longer appropriate.

So in actuality it’s a very nuanced thing to build, with basically infinite edge cases.

5 Likes

Yes, and it would then be great if these personality settings could be part of the configuration settings. Nothing complex, just a text field that is “injected” by default in which you can specify what personality (or other behaviour) you want the R1 to use.
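
Purely as a sketch of what I mean (the field names here are made up, not rabbit’s actual settings schema), it could be as simple as prepending that free-text field to whatever default system prompt r1 already uses:

```python
# Hypothetical example: a user-editable "personality" field that gets injected
# ahead of the assistant's default system prompt. Names are invented.
DEFAULT_SYSTEM_PROMPT = "You are r1, a helpful voice assistant."

user_settings = {
    # free-text field the user edits once in the rabbithole settings page
    "personality": "Light-hearted and a bit sarcastic, but drop the jokes for serious topics.",
}

def build_system_prompt(settings: dict) -> str:
    """Inject the user's personality text (if any) into every conversation."""
    personality = settings.get("personality", "").strip()
    if not personality:
        return DEFAULT_SYSTEM_PROMPT
    return f"{DEFAULT_SYSTEM_PROMPT}\nPersonality: {personality}"

print(build_system_prompt(user_settings))
```

Nothing more than that would already cover most of what people are asking for in this thread.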

4 Likes

Well, it would be nice if you had 2 or 3 predefined personalities, like Friendly Rabbit, Sarcastic Rabbit and Rambo Rabbit or something like that.

2 Likes

While it’s a great feature in many AI systems, the R1 is currently designed to refrain from altering its personality construct. If this functionality does become available, it will likely prioritize localization first, with casual or profane language not expected to be included anytime soon.

I believe that creating a centralized morality analysis model for sentiment and morality checks would be a beneficial step forward. The team plans to whitelist teach mode for this exact reason: without guardrails, it’s like a WMD in cyberspace.

The same morality analysis model (MAM) could be used to ensure custom personality constructs remain within acceptable guidelines. As Simon mentioned earlier, it would prevent unacceptable responses to serious situations. The R1 should know when to break character, and rabbits should be aware when they’re doing something dangerous or illegal so they refuse to continue.
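
To make the idea concrete, here’s a rough sketch (every name is hypothetical; this is not rabbit’s code, and a real MAM would be a trained classifier rather than a keyword check) of how such a gate could decide when r1 should break character:

```python
# Hypothetical sketch of a morality/sentiment gate ("MAM"). In reality this
# would be a classifier or an LLM call; here it's a stub keyword check.
SERIOUS_TOPICS = {"death", "funeral", "violence", "attack", "grief"}

def seriousness_score(text: str) -> float:
    """Stub scorer: 1.0 if the request touches a serious topic, else 0.0."""
    return 1.0 if set(text.lower().split()) & SERIOUS_TOPICS else 0.0

def choose_persona(user_request: str, custom_persona: str) -> str:
    """Break character and fall back to a neutral, empathetic tone when needed."""
    if seriousness_score(user_request) >= 0.5:
        return "neutral and empathetic, no jokes"
    return custom_persona

print(choose_persona("tell me a joke", "sarcastic rabbit"))
print(choose_persona("there was a death in my family", "sarcastic rabbit"))
```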

The Rabbit Inc. team seems to have hinted at this with their stringent control over Vision’s morality. Occasionally it might hallucinate, thinking a ketchup bottle is something obscene, because it’s designed not to encourage dangerous behavior. For example, when r1 sees a bong, it will very clearly state that it will make no assumptions about the use of the object, or that it does not encourage dangerous behavior. Safety is certainly a top priority, even if it results in some silly-sounding refusals from time to time.

3 Likes

I’ve been trying to rename R1 and teach R1 my name. I hate that it calls me “user”. I wish I could refer to my R1 by a given name. I’ve also tried to get beta rabbit to remember various custom commands for it to recall and modify notes. It hasn’t worked at all, but this would be a great feature. For instance, I would like to:

  • ptt “beta rabbit, create a note entitled ‘Bad Grocery List’ which will be a bulleted list of items.”
  • ptt “beta rabbit, every time I say the phrase ‘That looks good,’ followed by ‘x’, add ‘x’ to my note entitled ‘Bad Grocery List’ as an item in the bulleted list.”
  • ptt “That looks good, Ninkasi Imperial IPA Variety Pack” (R1 adds the item to my “Bad Grocery List” with confirmation)
  • ptt “That looks good, Cheese Burger In A Can” (R1 adds the item to my “Bad Grocery List” with confirmation)
  • ptt “That looks good, Dream Pop Prime Energy Drink” (R1 adds the item to my “Bad Grocery List”)
  • ptt “beta rabbit, please recall my note entitled ‘Bad Grocery List’ and read it verbatim.”
  • And ideally R1 would recall the list verbally and show it to me on the screen in a bulleted list format. But this is NOT what happens.

Trying to accomplish this at present, r1 hallucinates, concocts new and irrelevant lists, and warns “user” of a rabbithole security breach when questioned.
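
Just to show the behaviour I’m imagining, here’s a toy sketch (all names invented; this has nothing to do with how r1 actually stores notes) of the whole flow:

```python
# Toy sketch of the note-keeping commands described above. All names invented.
notes: dict[str, list[str]] = {}

def create_note(title: str) -> None:
    """Create an empty note with the given title."""
    notes.setdefault(title, [])

def handle_phrase(utterance: str, target_note: str = "Bad Grocery List") -> str:
    """If the utterance starts with the trigger phrase, add the rest as a list item."""
    trigger = "that looks good"
    if utterance.lower().startswith(trigger):
        item = utterance[len(trigger):].strip(" ,")
        notes.setdefault(target_note, []).append(item)
        return f"Added '{item}' to '{target_note}'."
    return "No matching command."

def read_note(title: str) -> str:
    """Return the note as a bulleted list, ready to be read aloud or shown on screen."""
    return "\n".join(f"• {item}" for item in notes.get(title, []))

create_note("Bad Grocery List")
handle_phrase("That looks good, Ninkasi Imperial IPA Variety Pack")
handle_phrase("That looks good, Cheese Burger In A Can")
handle_phrase("That looks good, Dream Pop Prime Energy Drink")
print(read_note("Bad Grocery List"))
```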

2 Likes

And… I can’t stand r1’s voice. It would be PRICELESS if we could change its voice.

2 Likes

Interesting. My r1 seems to know my name very well. It seems to pull it from my rabbit hole account name. This might seem silly to ask, but have you tried saying “remember that my name is ‘x’?”

1 Like

How about we start with the idea that r1 “can’t assist with creating explicit content”? This is what I encounter when trying to use Suno over R1. This sucks! What good is an assistant that WON’T do what’s asked? I am a songwriter. Art is controversial. What good is an assistant (for an adult) that can’t interact as an adult? I respectfully request that Rabbit allow R1 to have an ADULT PERSONALITY. I’d like it to be able to generate lyrics with the word “heck” every now and then. R1 is such a pearl-clutcher that it’s almost impossible to generate anything over Suno that has a cutting quality. Lame.

2 Likes

Is that an r1 issue or a Suno issue? I.e., are you able to do what you’re trying to do with the exact same prompt when using Suno on the web?

It’s an r1 issue for sure. I can’t even get r1 to generate male vocal or instrumental tracks consistently. I have mentioned in various posts here that I’ve gotten some disturbing results. After many, many iterations on the same prompt for instrumental music, (I believe) R1 inserted data so that my Suno product trolled me, lyrically telling me instrumental music is hard and inferior to lyrics… ‘how come I don’t want lyrics…’ crazy stuff like that. And EVERY iteration on a male vocal I can think of produces one or both Suno products with a female voice.
My r1 prompt today that led to this particular post included the word “gay”. r1 told me it could not generate explicit content. Bonkers to me.

I use Suno regularly and never have these issues.

1 Like

Interesting, I’ll feed that back to the team.

r1 is not inserting lyrics though, Suno is.

The reason this happens is that Suno is operated by LAM, and on their web interface there’s a checkbox that must be checked for the song to be an instrumental. We haven’t taught the system to check that box yet; that’s definitely on our backlog of fixes.
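
For illustration only, the missing step amounts to a single extra action in the web automation, along the lines of this Playwright-style sketch (the label and page URL are guesses, and this is not the actual LAM implementation):

```python
# Illustrative only: ticking Suno's "instrumental" toggle via browser automation.
# The label and URL are assumptions; the real page structure may differ.
from playwright.sync_api import sync_playwright

def make_instrumental(page) -> None:
    """Check the 'Instrumental' toggle if it isn't already checked."""
    checkbox = page.get_by_label("Instrumental")  # hypothetical label
    if not checkbox.is_checked():
        checkbox.check()

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://suno.com/create")  # assumed URL for the create page
    make_instrumental(page)
    browser.close()
```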

2 Likes