Taming LLMs – A Practical Guide to LLM Pitfalls with Open Source Software

https://www.souzatharsis.com/tamingLLMs/markdown/toc.html

By sebg

huqedato | 3 comments | 2 weeks ago
I took a brief look into this guide. What surprises me is that it sounds like being generated with AI. Am I the only one who thinks so?

Just read this paragraph: "In conclusion, while managing output size limitations in LLMs presents significant challenges, it also drives innovation in application design and optimization strategies. By implementing techniques such as context chunking, efficient prompt templates, and graceful fallbacks, developers can mitigate these limitations and enhance the performance and cost-effectiveness of their applications. As the technology evolves, advancements in contextual awareness, token efficiency, and memory management will further empower developers to build more robust and scalable LLM-powered systems. It is crucial to stay informed about these developments and continuously adapt to leverage the full potential of LLMs while addressing their inherent constraints."

tarboreus | 0 comments | 2 weeks ago
LLMs love to sum things up. On the one hand, this. On the other hand, that! It is very important. Challenges!
simonw | 1 comment | 2 weeks ago
It feels a little risky to me to construct a book like this with so many LangChain examples - my impression of LangChain is that it's still a project with a rapid rate of development that might not stay stable, though maybe I'm wrong about that.
zbyforgotp | 4 comments | 2 weeks ago
LangChain is great for finding out who has a clue.
WesleyJohnson | 0 comments | 2 weeks ago
How is this helpful? Not everyone can keep up with the rapidly shifting world of AI; it's worse than JavaScript. People have day jobs that don't revolve around AI and don't have the bandwidth to devote endless hours trying to keep up, while still getting work done that makes money for the business they're in.

I read through the article and it provided a lot of helpful information on the pitfalls with LLMs which, based on the title, is its intended purpose. I didn't take it as a shining recommendation for LangChain. If your point is that suggesting LangChain means they're not keeping up, as someone else stated, and because of that the other information is probably dated as well - that's far more helpful.

I wouldn't mind seeing a current guide on pitfalls and practical examples of interacting with LLMs in raw Python - from someone who "has a clue".

pkkkzip | 3 comments | 2 weeks ago
I think this is an excellent litmus test. Anyone caught pushing LangChain signals pedantry and should be completely ignored.
a_bonobo | 2 comments | 2 weeks ago
Could you both expand on what you mean? I've built some useful stuff with langchain v1 but gave up on porting it to v2, looked like too much work.
tonyoconnell | 2 comments | 2 weeks ago
Langchain abstracts too much and you can't really see what's going on or control the flow of data with real precision. They are fixing that though and now you have much better visibility into what's being inferred. I think Langchain is pretty useful though, especially if you want to integrate with something quickly.
_neil | 0 comments | 2 weeks ago
I think this is the reasonable answer. Langchain gets a lot of derision and rightly so but it does have uses for prototyping. It’s also a good resource for learning the landscape specifically because of the integrations. I haven’t used it in a while so I’m not familiar with the most recent updates.
mavelikara | 0 comments | 2 weeks ago
Thanks for explaining.
d4rkp4ttern | 0 comments | 2 weeks ago
Indeed, it has become the LLM equivalent of IBM, as in -- "No one ever got fired for choosing LangChain". A certain well-known ML person even runs a course on LangChain, as if it's a "fundamental" thing to know about LLMs. I was also surprised/disappointed to see that the otherwise excellent "Hands-on Large Language Models" book from O'Reilly has extensive examples using this library.

In Apr 2023 we (CMU/UW-Madison researchers) looked into this lib to build a simple RAG workflow that was slightly different from the canned "chains" like RetrievalQAConversation or others, and ended up wasting time hunting docs and notebooks all over the place and going up and down class hierarchies to find out what exactly was going on. We decided it shouldn't have to be this hard, and started building Langroid as an agent-oriented LLM programming framework.

In Langroid you set up a ChatAgent class which encapsulates an LLM-interface plus any state you'd like. There's a Task class that wraps an Agent and allows inter-agent communication and tool-handling. We have devs who've found our framework easy to understand and extend for their purposes, and some companies are using it in production (some have endorsed us publicly). A quick tour gives a flavor of Langroid: https://langroid.github.io/langroid/tutorials/langroid-tour/
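The agent/task pattern described above can be sketched in a few lines. This is an illustrative toy, not Langroid's actual API: the class and method names here are hypothetical, and a stub function stands in for a real LLM.

```python
from dataclasses import dataclass, field

@dataclass
class ChatAgent:
    """Encapsulates an LLM interface plus whatever state you'd like."""
    llm: callable                       # any function: prompt -> response
    history: list = field(default_factory=list)

    def respond(self, message: str) -> str:
        self.history.append({"role": "user", "content": message})
        reply = self.llm(message)
        self.history.append({"role": "assistant", "content": reply})
        return reply

@dataclass
class Task:
    """Wraps an agent and drives an interaction (could loop, delegate, etc.)."""
    agent: ChatAgent

    def run(self, prompt: str) -> str:
        return self.agent.respond(prompt)

# A stub LLM keeps the sketch self-contained; swap in a real client call.
echo_llm = lambda p: f"echo: {p}"
task = Task(ChatAgent(llm=echo_llm))
print(task.run("hello"))  # -> echo: hello
```

The point of the pattern is that the agent owns state (history, tools) while the task owns control flow, so multi-agent setups compose by wiring tasks together.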

3abiton | 0 comments | 2 weeks ago
It's unreasonable to expect that everyone is on top of every latest LLM advancement.
knowsuchagency | 0 comments | 2 weeks ago
Agreed. I wrote my own LLM abstraction library in a few hundred lines of code: https://github.com/knowsuchagency/promptic
factormeta | 2 comments | 2 weeks ago
As a developer with extensive database ETL experience, is it still necessary to learn or use LangChain? Would it just be easier to load directly into a vector db?
lgas | 1 comment | 2 weeks ago
It was never necessary to use it. It's handy for exploratory work and studying the patterns of how they glued stuff together behind the scenes, but once you know what you want to build it's another bloated abstraction that's just in the way.
mritchie712 | 0 comments | 2 weeks ago
> bloated abstraction that's just in the way

I agree with this, but would argue it's not even useful for exploratory work. Most of its functionality can be generated in a single prompt for your use case.

gr3ml1n | 1 comment | 2 weeks ago
LangChain was the first big attempt at a cohesive LLM application framework. As a result, it's terrible. If someone is seriously suggesting using it, they aren't keeping up.
Uehreka | 4 comments | 2 weeks ago
So what are the people who are “keeping up” using? No one ITT is saying what the thing that replaced LangChain is.
jackmpcollins | 0 comments | 2 weeks ago
If you are using Python, check out the package I've been building, magentic https://github.com/jackmpcollins/magentic It supports structured outputs and streaming, and aims to avoid making unnecessary abstractions (but might require some more understanding of LLM patterns as a result).

Also recently released is pydantic-ai, which is also based around pydantic / structured outputs, though works at level of "agents". https://github.com/pydantic/pydantic-ai
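The "structured outputs" idea both libraries build on is simple enough to sketch without either dependency: ask the model for JSON matching a schema, then parse and validate it into a typed object. Everything below is illustrative (a stub model stands in for a real API call, and `Recipe` is a made-up schema), not the API of magentic or pydantic-ai.

```python
import json
from dataclasses import dataclass

@dataclass
class Recipe:
    name: str
    minutes: int

def extract_recipe(llm, text: str) -> Recipe:
    prompt = (
        'Return ONLY JSON like {"name": str, "minutes": int} '
        f"for the recipe in: {text}"
    )
    raw = llm(prompt)
    data = json.loads(raw)  # fails loudly on malformed model output
    return Recipe(name=str(data["name"]), minutes=int(data["minutes"]))

# Stub model returning well-formed JSON, in place of a real client.
stub_llm = lambda prompt: '{"name": "pancakes", "minutes": 20}'
print(extract_recipe(stub_llm, "How do I make pancakes?"))
```

The libraries above add the parts this sketch skips: schema-aware prompting, retries on invalid JSON, and streaming.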

oersted | 1 comment | 2 weeks ago
Frankly, just Python. LLM generation is just a function call, fetching from a vector db is just a function call.

LLMs are hard to tame at scale, the focus is on tightly controlling the LLM inputs, making sure it has the information it needs to be accurate, and having detailed observability over outputs and costs. For that last part this new wave of AI observability tools can help (Helicone, Langsmith, W&B Weave...).

Frameworks like LangChain obscure the exact inputs and outputs and when the LLM is called. Fancy agentic patterns and one-size-fits-all RAG are expensive and their effectiveness in general is dubious. It's important to tightly engineer the prompt for every individual use-case and to think of it as a low-level input-output call, just like coding a good function, rather than a magical abstract intelligent being. In practice, I prefer to keep the control and simplicity of vanilla Python so I can focus on the actually difficult part of prompting the LLM well.
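The "just Python" approach above fits in one short sketch: retrieval and generation are plain function calls, and every exact prompt and output gets logged for observability. The embedding function, stub model, and in-memory "vector db" here are toy stand-ins, not any particular library.

```python
import math
import time

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, docs, k=1):
    """'Vector db' fetch: rank stored (vector, text) pairs by similarity."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def answer(llm, embed, query, docs, log):
    context = retrieve(embed(query), docs)
    prompt = f"Context: {context}\nQuestion: {query}"
    log.append({"prompt": prompt, "ts": time.time()})  # record the exact input
    out = llm(prompt)
    log.append({"output": out})                        # and the exact output
    return out

# Toy embeddings and a stub model keep the example self-contained.
docs = [([1.0, 0.0], "cats purr"), ([0.0, 1.0], "dogs bark")]
embed = lambda q: [1.0, 0.0] if "cat" in q else [0.0, 1.0]
llm = lambda p: f"answered using: {p.splitlines()[0]}"
log = []
print(answer(llm, embed, "why do cats purr?", docs, log))
```

Because nothing is hidden behind a framework, the log shows exactly what the model saw and said, which is the observability point made above.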

spacemanspiff01 | 2 comments | 2 weeks ago
What are your thoughts on dspy?
oersted | 0 comments | 2 weeks ago
It has caught my attention, I keep hearing about it as a new industry standard, I keep meaning to try it.

The reason why I keep procrastinating it is that, again, experience has shown me that LLMs are not really at a point where you can afford to abstract away the prompting. At least in the work I have been doing (large-scale unstructured data extraction and analysis), direct control over the actual input string is quite critical to getting good results. I also need fine-grained control over costs.

The DSPy pitch of automagically optimizing a pipeline of prompts sounds costly and hard to troubleshoot and iteratively improve by hand when it inevitably doesn't work as well as you need it to out-of-the-box, which is a constant with AI.

Don't get me wrong: I know I sound quite skeptical, but I intend to keep giving all these advancements a serious try. I'm sure one will eventually be a big upgrade.

NeutralCrane | 0 comments | 2 weeks ago
In my opinion, utterly useless and I put it in the same bucket as Langchain. Lots of grandiose claims but doesn’t actually solve any problems people have.

I think we are at a stage where people are so eager to build something around LLMs to become the next shovel-maker, that a lot of what is being built doesn’t actually serve anyone’s needs.

NeutralCrane | 0 comments | 2 weeks ago
To be honest, I think a lot of people realize that what Langchain is doing is providing a small amount of value in the form of a huge amount of abstraction, which means it can be convenient for very simple off-the-shelf solutions, and a huge headache for anything else. Most people realize the value add of Langchain can be recreated with a few lines of code, and end up just building their own.
baobabKoodaa | 0 comments | 2 weeks ago
> No one ITT is saying what the thing that replaced LangChain is.

LangChain never solved a real problem to begin with, so there's nothing that needs to be replaced.

Just write your own Python code that does the same thing that LangChain needs 10 layers of abstraction to do.
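What a LangChain "chain" does (fill a prompt template, call the model, post-process) really is a few lines of plain Python. This is a hypothetical sketch; the stub `llm` is a placeholder for a real client call.

```python
def chain(template: str, llm, postprocess=str.strip):
    """Compose template -> model -> post-processing into one callable."""
    def run(**variables) -> str:
        prompt = template.format(**variables)
        return postprocess(llm(prompt))
    return run

# Stub model in place of a real API call.
summarize = chain("Summarize in one line: {text}", llm=lambda p: "  a summary  ")
print(summarize(text="some long document"))  # -> a summary
```

Swapping the stub for a real client is the only change needed, and the control flow stays fully visible.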

constantinum | 0 comments | 2 weeks ago
The chapter comparing techniques for structured data extraction is insightful. [1] If anyone wants to explore structured data extraction techniques further, refer to this piece. [2]

[1] https://www.souzatharsis.com/tamingLLMs/notebooks/structured...

[2] https://unstract.com/blog/comparing-approaches-for-using-llm...

agnishom | 0 comments | 2 weeks ago
So this is the modern day equivalent of an O'Reilly book with the title "Mastering LLMs"?
msp26 | 1 comment | 2 weeks ago
Looks fantastic, thanks for the deep dive on structured output. Will read thoroughly.
msp26 | 0 comments | 5 days ago
Actually having read it now, there's way too much fluff. This could be way shorter.
parmesean | 0 comments | 2 weeks ago
Fantastic!