Artificial Intelligence

Precision Search and Safer Interaction Powered by AI

Our platform is built on three different state-of-the-art language models.

The first and most important is the LLM-powered search. While conventional encoder-decoder architectures excel at both understanding the context of long prompts and next-word prediction (generating new content), the specific search needs of our platform are slightly different.
An intelligent "free text" search engine should understand complex prompts such as "all doge-related meme coins on the Avalanche network." Most of this search functionality relies on the so-called "embedding" layer of modern LLMs. For this reason, Altcoinist selected a state-of-the-art architecture designed to excel at embedding speed: E5.
E5 allows the Altcoinist platform to match the similarity of project details 40x faster than ChatGPT and other LLM competitors, as it does not utilize the next-word-prediction pipeline of a decoder layer.
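The search described above can be sketched as follows. This is a minimal, hypothetical illustration of embedding-based ranking: in production an E5-style encoder would produce high-dimensional vectors for each query and project description, whereas here tiny hand-made vectors stand in so the ranking logic is visible. The project names, vectors, and `search` helper are illustrative assumptions, not platform code.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vec, project_vecs, top_k=2):
    """Rank projects by embedding similarity to the query vector."""
    ranked = sorted(project_vecs.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Toy embeddings; in practice these come from the E5 encoder.
projects = {
    "doge-meme-avax": [0.9, 0.8, 0.1],
    "defi-lending":   [0.1, 0.2, 0.9],
    "cat-meme-eth":   [0.8, 0.3, 0.2],
}
# Stand-in embedding of "doge-related meme coins on Avalanche".
query = [0.85, 0.75, 0.15]
print(search(query, projects))  # → ['doge-meme-avax', 'cat-meme-eth']
```

Because no text is generated at query time, ranking is a handful of vector operations per candidate, which is where the speed advantage over decoder-based pipelines comes from.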
The second LLM used in the platform is the toxicity filter, which consists of three sub-models: an original BERT model (an encoder-heavy model suited to contextual tasks such as text classification), a BERT model fine-tuned on the Jigsaw Toxic Comment Classification Challenge dataset, and a final classifier. The final classifier was trained against these two models simultaneously until it converged on reliably differentiating between them, successfully filtering out content flagged by the "toxic" version of BERT.
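The three-part filter above can be sketched at a high level. This is a hypothetical toy, not the production model: keyword counters stand in for the base and Jigsaw-fine-tuned BERT scorers, and a simple weighted combination stands in for the trained final classifier. All names, weights, and the `TOXIC_MARKERS` vocabulary are illustrative assumptions.

```python
# Illustrative stand-in vocabulary; a real model learns this from data.
TOXIC_MARKERS = {"scam", "idiot", "trash"}

def base_score(text: str) -> float:
    """Stand-in for the original BERT model: a weak, generic signal."""
    words = text.lower().split()
    return sum(w in TOXIC_MARKERS for w in words) / max(len(words), 1)

def finetuned_score(text: str) -> float:
    """Stand-in for the Jigsaw-fine-tuned BERT: a sharper toxicity signal."""
    return min(1.0, 2.0 * base_score(text))

def is_toxic(text: str, threshold: float = 0.2) -> bool:
    """Stand-in for the final classifier: in practice trained to separate
    the two sub-models' behavior; here it simply weights the fine-tuned
    signal more heavily and applies a cutoff."""
    combined = 0.3 * base_score(text) + 0.7 * finetuned_score(text)
    return combined > threshold

print(is_toxic("this project is a trash scam"))      # flagged
print(is_toxic("interesting avalanche meme coin"))   # passes
```

The same ensemble pattern, swapping in differently fine-tuned sub-models, is what makes the planned extension to shilling and scam detection a training change rather than an architectural one.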
We plan to evolve this filter model to include project shilling and scam filtering based on the same training logic.