Vec2Vec: A Compact Neural Network Approach for Transforming Text Embeddings with High Fidelity. (arXiv:2306.12689v1 [cs.CL])


By: Andrew Kean Gao. Posted: June 23, 2023

Vector embeddings have become ubiquitous tools for many language-related
tasks. A leading embedding model is OpenAI's text-ada-002, which can embed
approximately 6,000 words into a 1,536-dimensional vector. While powerful,
text-ada-002 is not open source and is only available via API. We trained a
simple neural network to convert open-source 768-dimensional MPNet embeddings
into text-ada-002 embeddings. We compiled a subset of 50,000 online food
reviews. We calculated MPNet and text-ada-002 embeddings for each review and
trained a simple neural network for 75 epochs. The network was designed to
predict the corresponding text-ada-002 embedding for a given MPNet
embedding. Our model achieved an average cosine similarity of 0.932 on 10,000
unseen reviews in our held-out test dataset. We manually assessed the quality
of our predicted embeddings for vector search over text-ada-002-embedded
reviews. While not as good as real text-ada-002 embeddings, predicted
embeddings were able to retrieve highly relevant reviews. Our final model,
Vec2Vec, is lightweight (<80 MB) and fast. Future steps include training a
neural network with a more sophisticated architecture and a larger dataset of
paired embeddings to achieve greater performance. The ability to convert
between and align embedding spaces may be helpful for interoperability,
limiting dependence on proprietary models, protecting data privacy, reducing
costs, and offline operations.
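The abstract describes the mapping only as a "simple neural network" trained for 75 epochs, so the following is a minimal sketch of the idea rather than the authors' implementation. It assumes PyTorch, a one-hidden-layer MLP (the hidden width of 1200 is a placeholder), a cosine-similarity training loss (plain MSE would be an equally plausible choice), and precomputed paired embeddings supplied as tensors.

```python
# Minimal sketch of the Vec2Vec idea (not the authors' code): learn a
# mapping from 768-d MPNet embeddings to 1536-d text-ada-002 embeddings.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class Vec2Vec(nn.Module):
    """One-hidden-layer MLP; architecture details are assumptions."""
    def __init__(self, in_dim=768, hidden=1200, out_dim=1536):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def train(mpnet_emb, ada_emb, epochs=75, batch_size=256, lr=1e-3):
    """mpnet_emb: (N, 768) float tensor; ada_emb: (N, 1536) float tensor."""
    model = Vec2Vec()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = DataLoader(TensorDataset(mpnet_emb, ada_emb),
                        batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            pred = model(x)
            # Train to maximize cosine similarity with the true
            # text-ada-002 embedding, i.e. minimize 1 - cos(pred, y).
            loss = 1.0 - nn.functional.cosine_similarity(pred, y).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

At search time, a query would be embedded with MPNet (e.g., the open-source all-mpnet-base-v2 model, assumed here), mapped through the trained network, and compared by cosine similarity against an existing index of text-ada-002 embeddings.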
