I thought that the ai in R1 would be learning constantly and working out how to find and sort information.
It is therefore interesting to understand why it can’t find and play a song on an open platform (such as YouTube) and instead just replies that it doesn’t have access to a subscription music service.
Why can’t R1 access publicly available data?
So the r1 can access updated information; you just need to nudge it to do that. Saying something like “look up the …” usually does the trick.
Outside that, it’s easy to misunderstand: although we commonly call this “AI”, it’s not really AI in the sense of something that learns live. It’s just using the context of the conversation, plus its trained ability to predict the best next words, to get better answers closer to what the person wants.
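To illustrate that point, here is a toy sketch (hypothetical names, not rabbit’s actual code): the model itself never changes between requests; the only “memory” it appears to have comes from resending the conversation history as context each time.

```python
# Hypothetical sketch: a stateless chat loop. fake_model is a stand-in for an
# LLM -- the point is that nothing about the model changes between calls; all
# apparent "memory" comes from resending the conversation history as context.

def fake_model(context):
    # Stand-in for an LLM: just reports how much context it was given.
    return f"(reply based on {len(context)} prior messages)"

history = []

def ask(user_message):
    history.append({"role": "user", "text": user_message})
    reply = fake_model(history)  # model weights are fixed; only the context grows
    history.append({"role": "assistant", "text": reply})
    return reply

ask("play a song on YouTube")
print(ask("what did I just ask?"))  # answerable only because history was resent
```

Drop the `history` list and every request starts from zero, which is why these devices feel like they “forget” unless the context (or a journal lookup) is supplied again.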
Looking forward to teach mode for the utility it will bring.
Use the rabbit for a while and take some notes… after a couple of hundred entries in your “rabbithole journey”, say “search my journals for … and give me a summary… and also tell me when the entries were created.” Then you’ll see that it maybe doesn’t learn as expected… but it is a perfect second brain for daily use.
Currently the r1 is an LLM in a box with a couple of LAM features. It can’t really learn anything, as others here have said. It’s a device that thankfully will change as we go, and once teach mode goes into testing of some kind, we will see just how far that might go!
That’s totally accurate. r1 at this point is script-based, using APIs from other AI models such as GPT or Perplexity… it’s not itself an AI model.
The recall function is a simple search in a database (your journal/rabbithole), all triggered by the user’s natural-language voice input.
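A toy sketch of that idea (hypothetical data and names, nothing from rabbit’s real code): the “recall” is just a keyword match over stored entries, returning each hit along with when the entry was created, like the journal summary described above.

```python
from datetime import datetime

# Hypothetical sketch of a journal "recall": a plain keyword search over saved
# entries, returning the matching text plus each entry's creation time. This
# only illustrates the idea of a simple database lookup behind a
# natural-language trigger -- it is not rabbit's actual implementation.

journal = [
    {"created": datetime(2024, 5, 1, 9, 30), "text": "Asked about train times to Berlin"},
    {"created": datetime(2024, 5, 3, 18, 5), "text": "Recipe ideas for lentil soup"},
    {"created": datetime(2024, 5, 7, 8, 15), "text": "Berlin hotel comparison notes"},
]

def recall(query):
    q = query.lower()
    return [
        f'{e["created"]:%Y-%m-%d %H:%M} - {e["text"]}'
        for e in journal
        if q in e["text"].lower()
    ]

for line in recall("berlin"):
    print(line)
```

No learning happens here: the entries never change the model, they just sit in storage until a query pulls them back out.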