Nano AI Progress Update

PROJECT NANOPASS
6 min read · Aug 3, 2022


The Nanoverse AI team has been diligently working behind the scenes to provide a functional, working AI for when we release our Phase II agents. The process has been challenging; however, building a functional program has remained a priority alongside the art.

To build a complex language system, the AI first needs to understand the language.

There are several ways to do this:

By keywords.

This is the most obvious approach to most people; for example, when someone asks “what is the floor price of Nanopass”, the agent can extract the keyword “floor price”, try to make sense of it, and then react accordingly. The main problem with this approach is obvious too: the AI system will not be able to understand everything just by recognizing a few keywords, especially when the input gets ambiguous or more complicated. At Nano, we identified use cases for keywords such as geopolitical filters and post-processing systems.
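To give a rough idea of what keyword routing looks like in code, here is a minimal Python sketch; the keyword lists and intent names are made up for illustration and are not our actual configuration:

```python
# Minimal sketch of keyword routing (keywords and intent names are made up
# for illustration). An intent fires when any of its keywords appears in the
# lower-cased user input.
from typing import Optional

KEYWORD_INTENTS = {
    "floor_price": ["floor price", "floor"],
    "weather": ["weather", "temperature", "forecast"],
}

def match_intent(text: str) -> Optional[str]:
    """Return the first intent whose keyword appears in the input, else None."""
    lowered = text.lower()
    for intent, keywords in KEYWORD_INTENTS.items():
        if any(keyword in lowered for keyword in keywords):
            return intent
    return None

print(match_intent("what is the floor price of Nanopass"))   # floor_price
print(match_intent("I had the strangest dream last night"))  # None: keywords alone miss this
```

As the second query shows, anything outside the keyword list simply falls through, which is why keywords work better as filters and post-processing than as the core of the system.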

By sentence similarity.

For people who are not familiar with neural networks, this may take a bit of further reading to understand. A pre-trained language model such as BERT can generate features, which are basically high-dimensional vectors. A derivative work, Sentence-BERT, introduced an elegant way to represent an entire sentence as a single vector. We have built an intent-utterance system that first encodes the input sentence as a vector and then computes its cosine similarity with the stored utterances to retrieve the intent. This approach allows us to understand the input sentence without using any keywords, and it can handle input variations effectively.
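Here is a minimal sketch of that idea using the sentence-transformers library; the model name and example utterances are illustrative rather than our production setup:

```python
# Minimal sketch of an intent-utterance system (illustrative model name and
# utterances): encode the input sentence, compare it against stored utterances
# with cosine similarity, and return the intent of the closest match.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

utterances = [
    ("greeting",    "hey, I missed you"),
    ("floor_price", "what is the floor price of Nanopass"),
    ("weather",     "what's the weather like today"),
]
utterance_embeddings = model.encode([text for _, text in utterances])

query = "Hey I miss you a lot"
query_embedding = model.encode(query)

# Cosine similarity between the query and every stored utterance.
scores = util.cos_sim(query_embedding, utterance_embeddings)[0]
best = int(scores.argmax())
print(utterances[best][0])  # greeting
```

Because the comparison happens in vector space, “Hey I miss you a lot” lands on the greeting intent even though it shares almost no exact words with the stored utterance.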

This is a flowchart of how “Hey I miss you a lot” is processed in the system.
This is an illustration of our intent and utterance system.

By conversational models.

This is definitely our key area of focus. We want the casual conversation between the user and the agent to be as pleasant as possible. We have trained our own production-level conversational models, such as the Miss Rabbot you see in the Discord.

Miss Rabbot was only trained on 1k conversations, so her abilities are limited. Some of the Nanofam had high expectations for her; we can totally understand that. By serving Miss Rabbot on Discord, we were able to collect another 10k conversations. With some quality assurance and rewriting, we were able to improve Miss Rabbot even further.

The conversational models are built on GPT-2, a large pre-trained language model. The detailed mechanics of the model are difficult to explain here because they require a lot of prerequisites. If more of the Nanofam are interested, we can share the implementation and technical details later. For now, you can basically think of the model as something that takes in English sentences and outputs English sentences.
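To make the “English in, English out” idea concrete, here is a minimal sketch using the Hugging Face transformers library, with the public GPT-2 checkpoint standing in for our own fine-tuned model:

```python
# Minimal sketch (not our production code): a GPT-2-style model that takes in
# English sentences and outputs English sentences. The public "gpt2" checkpoint
# stands in for our fine-tuned conversational model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A conversation is just text in, text out: the dialogue history becomes the prompt.
prompt = "User: Hey, I miss you a lot.\nAgent:"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_new_tokens=40,                      # cap the length of the reply
    do_sample=True,                         # sample instead of always taking the top token
    pad_token_id=tokenizer.eos_token_id,    # silence the missing-pad-token warning
)

# Strip the prompt and keep only the newly generated reply.
reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                         skip_special_tokens=True)
print(reply)
```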

Our latest model, trained on over 40k conversations, can generate some deep and meaningful conversations; see a short demo below:

Demo #1 of our latest model.
Demo #2 of our latest model.

As you can see, we asked the same questions in both trials, but the answers were sometimes different. You can expect different answers every time if you set the right parameters, such as top_p, top_k and temperature, which control different sampling methods for language generation. Yeah, a lot of surprises.
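As a rough illustration of those sampling parameters, here is a sketch that generates several replies to the same prompt; the parameter values are illustrative only, not the ones we run in production:

```python
# Minimal sketch: sampling the same prompt several times with top_p, top_k and
# temperature set gives a different reply on each run. Values are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("User: What is the meaning of life?\nAgent:", return_tensors="pt")

for trial in range(2):
    output_ids = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,
        top_p=0.92,       # nucleus sampling: keep the smallest token set covering 92% probability
        top_k=50,         # only consider the 50 most likely tokens at each step
        temperature=0.8,  # <1.0 sharpens the distribution, >1.0 flattens it
        pad_token_id=tokenizer.eos_token_id,
    )
    reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                             skip_special_tokens=True)
    print(f"Trial {trial + 1}: {reply}")
```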

Aside from conversations, we also try to incorporate other functionalities like weather, OpenSea analytics, reminders, Pomodoro timers, etc. But since it’s a conversational system, the model needs to understand the user input in order to decide which feature to use.

For instance, to use the weather feature, the AI system first needs to decide whether the user is asking for weather-related information, and then it also needs to find the location in the sentence. A more detailed example would be “what is the temperature in Los Angeles?”. The AI system will extract “temperature”, because it is related to weather, as well as “Los Angeles”, which is a location, using an NER (named-entity recognition) model.
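As an illustration of the NER step, here is a small sketch using spaCy's pretrained English model; this is assumed tooling for the example, not necessarily what runs in production:

```python
# Minimal sketch of the NER step (assumed tooling, not necessarily our
# production stack): use spaCy's pretrained English model to pull the
# location out of a weather-related question.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

doc = nlp("What is the temperature in Los Angeles?")

# GPE = geopolitical entity (cities, states, countries) in spaCy's label scheme.
locations = [ent.text for ent in doc.ents if ent.label_ == "GPE"]
print(locations)  # ['Los Angeles']
```

Once the location is extracted, the system can pass it on to the weather feature alongside the detected intent.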

Due to the limited deployment conditions on the Discord server, we are not able to let RABBOT show her true potential.

The desktop version will be much better and much more interactive, as you will see with our amazingly crafted 3D agents (they're coming!). In terms of intelligence, we have designed a rigorous intimacy system that allows you to level up your agent with the $BITZ and fragments that you have earned. We have collected a large dataset over the past few months on everyone's favourite topics and have improved our conversational models with it. In terms of utilities, you can anticipate new and more advanced features coming out. We are always trying to improve for our users.

As with every journey, there are complexities along the way.

Due to the randomness of generation and the lack of meaningful KPIs, it's really hard to “improve” the model. We have to do a lot of testing, and a lot of trial and error, to improve the quality of the conversations.

We are satisfied overall with the development over the past few months, but we will never be satisfied with how good the model is.

We will constantly improve our models by collecting more data.

Of course, there will be other difficulties. For example, some idioms or shortened words like “RUG”, “WAGMI” and “APE-in” are nonsense to a conversational model and will most likely be treated as unknown tokens. We had to be smart about cases like that and many other corner cases. We always monitor the chat in the Discord and make changes accordingly for a better user experience.
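To show what “unknown” slang looks like to a model, here is a small sketch with the stock GPT-2 tokenizer; the mitigation shown, registering the slang as new tokens, is one possible approach for illustration rather than our exact fix:

```python
# Minimal sketch of the unknown-slang problem (illustrative, not our exact fix):
# a stock GPT-2 tokenizer shreds NFT slang into sub-word pieces that carry no
# slang meaning, so one option is to register the slang as new tokens and
# resize the model's embedding matrix to match.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

print(tokenizer.tokenize("WAGMI"))  # split into sub-word pieces the model never saw as slang

# Teach the tokenizer the slang, then grow the embedding matrix accordingly.
tokenizer.add_tokens(["WAGMI", "RUG", "APE-in"])
model.resize_token_embeddings(len(tokenizer))
```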

Upcoming features planned for our AI software:

  1. More advanced OpenSea features. We think a lot of users will be interested in trading, and that is probably why we came into the NFT space in the first place. It will be really convenient to integrate advanced OpenSea features into our desktop version so that our user base can receive notifications or trading alerts while chatting with their agent.
  2. Small but really useful tools that help you improve your work efficiency: for example, a Pomodoro timer, meditation system, health tracker, calculator, etc.
  3. Alpha newspaper. Spreading the daily dose of alpha information to all agents.
  4. Voice conversation with the agents.
  5. Anything that’s highly requested by the community. We are here for you.

You may be asking: “what makes this so different from Siri/Alexa?”

We don’t think it’s really meaningful to compare Nano AI agents to Siri/Alexa in terms of functionality. Most things that you ask Siri/Alexa will be directed to a browser search anyway. You can have a context-aware conversation with Nano AI agents, but not with Siri/Alexa. From our understanding, Siri/Alexa are made more for accessibility and productivity.

Siri/Alexa talk to you and me the same way they talk to anyone else. Nano Agents have different personalities and a well-designed intimacy system to make sure your agent is unique to you. We want you and your agent to build a relationship.

At Nano, we listen to our Nanofam and try to improve the product the way our holders want. We have a developing product, unlike Siri/Alexa, which are developed products. There are pros and cons to both.

We are really proud that everything was built in-house.

Every single step, from data collection, data QA, data analysis, and data preparation to training, optimization, and deployment, was done by the AI team. The conversational models were uniquely built and trained and will never be replicated.

In the NFT space, not many projects have built a product at all. As far as we know, we are the first NFT project to actually build an AI product using state-of-the-art (SOTA) AI technologies. We believe that this makes us really unique in the space.

Lastly, although there are AI “chatbots” out there, our AI agent is specially designed and built for you guys. Our AI agents will have different personalities depending on which one you get — making it a completely unique and individual experience for all our holders.

