What do you use your R1 for?

Just watched the video - looks great! I will have to try that!

Why not use Google or Alexa? I feel they are way ahead of Rabbit at this point. I had high hopes that LAM could be expanded to do what Alexa or Google can’t. But right now this is a far inferior product with far inferior results: with Alexa I can control most things around my house, while with the Rabbit I can create pretty images and that’s about it. The whole sales pitch for me was teach mode, and without that this is just a nice novelty.

1 Like

Can you explain how you summarize emails?

1 Like

Absolutely!

Turn on vision mode, make sure the text lines up within the visible area, and just ask, “Can you summarise this email for me?”

I am still working on a way to “hack” it into emailing me the summary. The only thing it will email me is a spreadsheet, so I am trying to work out the wording.

If you have multiple pages, you may have to paste them into a Word doc so the pages are side by side before you summarise.

2 Likes

I’m really disappointed too. I bought this piece of tech to be more productive at work. I just want it to convert recipes and help with invoices. I always try my R1 first, and when it inevitably fails, I pull out my phone and use my GPT app, which does what I ask without hesitation. I feel your frustration.

Fantastic! Thanks! :pray:t5:

2 Likes

It is early days; no newcomer has beaten Google Search yet. I hope they eventually exceed it with this product.

Although the Rabbit R1 does not have many functions yet, today I tried the ‘record a meeting’ function.

On the rabbit-hole portal I could also listen to and view the meeting recording (over 30 minutes), and the text summary was very clear.
The summary could include more detail, but I find this function helpful at the start of our Rabbit R1 adventure.

In the future it would be very helpful to have some filters in the ‘rabbit-hole’:
>>idea/suggestion: Filter in rabbit-hole

1 Like

Like you, I felt it was more of a novelty, which is a little disappointing. I understand it has yet to grow and develop, but I feel Jesse’s demos were misleading, as they gave the impression that a lot of what he was doing was already available.

Personally, I signed up knowing that it was yet to develop, but felt there was no need to pretend it could do more than it could.

I found one useful thing it could do yesterday. I’m a member of Amazon Vine, which sends me products for free to review. I asked the Rabbit to write the reviews for me, which it has done, and it has saved me a lot of time. However, about every third review it tells me it won’t do it for ‘ethical’ reasons, so it’s very inconsistent.

The ‘ethical’ thing is a real annoyance, and I’m going to start another thread to ask about that…

1 Like

Testing/curiosity. Is it any better at answering questions or solving problems with Perplexity AI than my 5-year-old smartphone?

Just today I had it scan a handwritten 3-column spreadsheet with 12 items per column; in less than 30 seconds it told me to check my email. I opened the email and it had everything correct.

I use mine mostly for Magic Camera, but I enjoy asking it questions and exploring its other potential uses. Early on I was a bit disappointed trying to get it to look at handwritten notes and transcribe them for me, but I got over it. I really like what they’ve done, and I love that there aren’t subscriptions. LAM’s potential seems cool, but it doesn’t matter as much to me. For $200, Magic Camera and a superior alternative to Google is money well spent.

I like seeing what some of you are doing with yours and feel inspired to get mine doing more.

How I use it:
Work:
I keep it in console mode with a USB keyboard; I switch to it and ask questions that stay on the screen. The exact same questions I would ask ChatGPT, Perplexity, or WolframAlpha, but all on the same screen without losing context.

Meeting:
It does a good summary and saves all the audio for up to two hours. I’ve tried it a couple of times; it’s cool and useful.

Recap:
Recaps of emails, Excel sheets on the fly via photo. It seems silly, but I use it a lot :smile:. It’s mind-blowing to open a CloudWatch dashboard and get a report in plain language.

Spotify:
Yes, I control Spotify with it (no, I don’t listen to it from the computer).

Photos:
I take a lot of photos to remember things; I’ve simply switched to using it instead of my phone.

Leisure:
I mess around with Midjourney.
I let my daughter play with Suno and Midjourney.
I ask silly questions while in the bathroom.
Various conversations, recipes, etc.
Gardening: it skillfully recognizes plants.

Pros:
The design is alien, really beautiful, even the interface. Too bad the UI isn’t very responsive and the UX makes some rather aggressive choices.

What I would like:
Pomodoro tools, timers, and other assistant features would be great: a calendar, messaging, and the like. Let’s say the same assistive features as the Apple Watch would be enough.
The scroll wheel should be clickable.

What sucks?
Well, it’s like Cyberpunk: it started badly, but there is hope, and I hope it ends the same way. Anyway, its limits are more those of generative AI itself than of the device…

What I need
I don’t care about LAM; I want to program my own browser “teach mode” with Playwright, Cypress, or whatever, and automate tasks myself, using the R1 to launch the routines, as sketched below.
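For anyone wondering what that could look like, here is a minimal sketch using Playwright’s Python API. Everything specific in it is hypothetical: the URL, the selectors, and the idea of triggering it from the R1 (say, via a small webhook that a voice prompt hits) are placeholders I made up, not an actual rabbit integration.

```python
# Minimal sketch of a self-hosted "teach mode" style routine using Playwright.
# The site, selectors, and credentials below are placeholders for illustration only;
# the point is just a scripted browser task you could launch from anywhere,
# e.g. a tiny webhook endpoint the R1 calls when you say "run my login routine".
from playwright.sync_api import sync_playwright

def run_routine() -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)   # run without a visible window
        page = browser.new_page()
        page.goto("https://example.com/login")       # placeholder URL
        page.fill("#username", "me@example.com")     # placeholder selectors/values
        page.fill("#password", "********")
        page.click("button[type=submit]")
        page.wait_for_load_state("networkidle")      # wait for the page to settle
        page.screenshot(path="result.png")           # keep evidence of the run
        browser.close()

if __name__ == "__main__":
    run_routine()
```

You could wrap `run_routine()` in any scheduler or webhook framework you already use; the R1 would only need to trigger the endpoint, not drive the browser itself.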

2 Likes

I’ve found it to be especially useful for these things:

  1. Recap current world news and specific situations like the Ukrainian war
  2. “Google” things on the fly
  3. When I do something for work I usually ask the R1 first to get something going
  4. Ask it for recipes

Honestly the list goes on and on. After each update I find something new to use it for.

What I’ve used the least are the LAM features and the camera. Honestly, they’re not really for me. At least not until teach mode comes out; that will be a game-changer for me.

1 Like

I ask the R1 to “tell me a new joke” every day. For the last month, the same jokes! When I get a new joke, progress ;>). Ask your R1 for a joke every day…

I simply ask it random questions that surface in my mind throughout the day. Not a very exciting answer but I think it comes down to context.

My phone is 8 years old (which surprised me when I fact-checked it just now!) and can’t run ChatGPT or any other fancy AI app, so it turns out the R1 is like a substitute device for doing voice chat with an AI. I acknowledge that, logically, the ~£200 could have gone towards a new phone, but I’m kinda fond of mine and not quite ready to replace it yet.

I dunno, I have one so I guess I’ll try to use it. Always inquisitive about something.