LLM's usefulness

Neotheone

Hi All,

I am keen to find out whether a high-end CPU + board + GPU combination would let options like LLaMA 2 support me in financial and systems research. Please pardon the noobish question. I only have a broad sense of what LLMs can and can't do, based on ChatGPT and a few other well-known alternatives. The public alternatives are heavily censored, though, and seem to be less useful than I might like them to be.

I know it is probably too ambitious, but can these models be used as a "personal assistant" that keeps track of different datasets and ongoing projects, compares newly published research papers, and gives brief summaries on demand so I can decide whether or not to read them? That would be a great help, but I am conscious I might be asking for the moon, since LLMs tend to "hallucinate".
 
If you already have the system, I suggest you give it a try. Unless someone here has built and deployed a model, it's hard to know the specific challenges you may face. But it's definitely worth a try. Also, see if you can find good pretrained models that you can deploy directly, without training a model yourself first.
 
So you're looking for a personal assistant specific to your use case. To be honest, I don't know how much data you're planning to have it handle, but you should be able to start tinkering by getting on GitHub and finding open-source models and systems uploaded for anyone to use. If you're lucky, you can also find models similar to what you want without having to tinker much. There's a lot of good stuff on there, completely free to use.
 
A project like https://github.com/Frost-group/The-Oracle-of-Zotero or https://github.com/neuml/paperai could be helpful. You could use LangChain to run a local model instead of calling OpenAI or Anthropic. Put research papers in, and it should create a QA system for them.
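To give a feel for what such a QA system does under the hood, here is a toy sketch of the "retrieve the most relevant paper for a question" step, in plain Python with no dependencies. Real systems like the ones above use embedding models rather than word overlap, and the paper snippets and function names here are made-up placeholders, not part of any of those projects:

```python
# Toy retrieval step behind "QA over papers": rank documents by
# word overlap with the question and return the best match.
# Real pipelines (LangChain + a local model) use vector embeddings;
# this only illustrates the retrieve-then-answer idea.

def tokenize(text):
    """Lowercase and split text into a set of words."""
    return set(text.lower().split())

def best_match(question, docs):
    """Return the document sharing the most words with the question."""
    q = tokenize(question)
    return max(docs, key=lambda d: len(q & tokenize(d)))

# Hypothetical one-line paper summaries standing in for real abstracts.
papers = [
    "Quantized llama models run on consumer GPUs with 4-bit weights",
    "Retrieval augmented generation reduces hallucination in QA systems",
]

print(best_match("how can I reduce hallucination", papers))
```

In a full pipeline, the retrieved text would then be pasted into the local model's prompt so it can answer from the paper rather than from memory, which is exactly the trick that keeps hallucination down.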
Hey, I'm sorry I saw this so late, but thank you for sharing. I will definitely try these out over time.

@kekerode: I have now started to play around with local models. It's still early stages; I'm trying out existing models with GPT4All. I hope to make more progress over the next few months and will share an update.

 
Unfortunately my laptop doesn't have enough memory to run GPT4All, so I'm waiting for a new laptop with better hardware. I am also trying to get access to Copilot for Microsoft 365 to see if I can use it to create summaries from existing documents.
 
Look into Ollama. You can download quantized versions of most popular models with a "docker pull"-style syntax. Use the version that will fit on your hardware.
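For example, assuming Ollama is installed, pulling and running a quantized model looks roughly like this (the `llama2:7b-chat-q4_0` tag is just one example of a 4-bit quantized variant; check Ollama's model library for tags that actually fit your machine):

```shell
# Download a 4-bit quantized 7B chat model
ollama pull llama2:7b-chat-q4_0

# Start an interactive chat session with it
ollama run llama2:7b-chat-q4_0

# Or run a single prompt non-interactively
ollama run llama2:7b-chat-q4_0 "Summarize this abstract: ..."
```

The quantization suffix (q4, q5, q8, etc.) is the main knob for trading answer quality against memory footprint, so start with the smallest variant that still answers sensibly.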
 