Sarah Lea | BSc BI & AI | Salesforce Consultant
@sarah-lea.bsky.social
Tech simplified for curious minds. Covering topics such as Data, AI & ML and Python Programming.

Discover step-by-step tutorials and hands-on practical guides. 🚀
📩https://medium.com/@schuerch_sarah
👀https://towardsdatascience.com/author/schuerch_sarah
Reposted by Sarah Lea | BSc BI & AI | Salesforce Consultant
Understand how chunk size influences retrieval in RAG systems. @sarah-lea.bsky.social's new article dives into 3 experiments that show how small, medium, and large chunks affect results, helping you see why it's more than just a parameter.
Chunk Size as an Experimental Variable in RAG Systems | Towards Data Science
Understanding retrieval in RAG systems by experimenting with different chunk sizes
towardsdatascience.com
December 31, 2025 at 7:18 PM
Why does a RAG system return a wrong answer even when the correct text exists?

I built a mini RAG system to explore how chunk size affects retrieval (a small sketch follows below):
☕Small chunks lose context
☕Medium chunks look stable, but can pick the wrong top-1 result
☕Large chunks are more robust, but coarser

bit.ly/3KTX4FQ
Understanding Retrieval in RAG Systems: Why Chunk Size Matters
Understanding Retrieval in RAG Systems: Why Chunk Size Matters A step-by-step retrieval guide using sentence transformers, chunk size and similarity scores. User: “How many vacation days am I …
medium.com
December 28, 2025 at 5:37 PM
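A minimal sketch of that kind of experiment (assuming sentence-transformers is installed; the file name, query and chunk sizes are made up for illustration):

```python
# Same document, same query, three chunk sizes: compare the top-1 retrieval.
from sentence_transformers import SentenceTransformer, util

text = open("handbook.txt", encoding="utf-8").read()    # hypothetical document
query = "How many vacation days am I entitled to?"      # hypothetical query

model = SentenceTransformer("all-MiniLM-L6-v2")
words = text.split()

for size in (30, 100, 300):                              # small, medium, large
    # Chunking: fixed-size word windows, no overlap.
    chunks = [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
    chunk_emb = model.encode(chunks, convert_to_tensor=True)
    query_emb = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, chunk_emb)[0]        # cosine similarities
    best = int(scores.argmax())
    print(f"chunk size {size:>3}: top-1 score {scores[best].item():.3f}")
    print("  ", chunks[best][:100], "...")
```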
Today the 1,500th reader joined, wow.

When I started writing on Medium, I just wanted to share a few thoughts about AI & Data Science.

Thanks for reading & following along.

Here’s the article that brought number 1,500 👇
code.likeagirl.io/how-to-study...
How to study Math-Heavy Topics like Reinforcement Learning
Understanding how our brain learns can make math-heavy subjects easier.
code.likeagirl.io
November 8, 2025 at 12:16 AM
Ever seen something like this in a course and felt your brain freeze?

∇J(θ) ≐ ∇Eπ [G ∣ s, a]

That moment of formula anxiety isn’t about being bad at math.
It’s about missing a method.

Once you treat symbols like a new language, the fear fades.

☕ Read the full piece: bit.ly/4qp0aBC
How to study Math-Heavy Topics like Reinforcement Learning
Understanding how our brain learns can make math-heavy subjects easier.
bit.ly
October 24, 2025 at 9:39 AM
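To make that concrete, here is one way to read the line above symbol by symbol (a sketch in standard RL notation; the formula is from the post, the reading is mine):

```latex
\nabla J(\theta) \doteq \nabla \, \mathbb{E}_{\pi}\left[\, G \mid s, a \,\right]
% \nabla J(\theta):            how the performance J changes as the policy parameters \theta change
% \doteq:                      "is defined as"
% \mathbb{E}_{\pi}[\,\cdot\,]: the expected value when acting according to policy \pi
% G \mid s, a:                 the return G, given state s and action a
```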
Ever sat in class thinking: I have no idea how to learn this?
That’s how my 5 strategies for studying math-heavy topics began. Try them out and let me know:

☕ With a Medium account: bit.ly/47ikxrp
☕ Friend link: bit.ly/4qp0aBC
October 20, 2025 at 11:01 AM
Today, a fellow computer science student told me he feels guilty when using ChatGPT and similar tools, as if he’s not really learning anymore. As if he were cheating.

It made me pause.

I’m not sure what I think yet: whether AI tools make us learn less, or just make us learn differently.
October 15, 2025 at 7:52 PM
Today I stumbled on the central limit theorem once again. The name says it already: central 😉

☕ Take many independent random variables & average them.
→ You’ll get something close to a normal distribution.
→ Whatever the originals looked like.

☕ Essential for anyone working with data.
September 24, 2025 at 6:34 PM
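A quick numerical check of that claim (a sketch with NumPy; the exponential distribution and the sample sizes are arbitrary choices):

```python
# Average many draws from a clearly non-normal distribution and look at the result.
import numpy as np

rng = np.random.default_rng(42)
n, repeats = 50, 10_000                      # average 50 variables, repeated 10k times

# Draws from a skewed (exponential) distribution.
draws = rng.exponential(scale=1.0, size=(repeats, n))
means = draws.mean(axis=1)                   # one average per repetition

# The averages concentrate around the true mean (1.0) and look roughly normal,
# with spread close to sigma / sqrt(n), as the CLT predicts.
print(f"mean of averages: {means.mean():.3f}")
print(f"std of averages:  {means.std():.3f}  (CLT prediction: {1.0 / np.sqrt(n):.3f})")
```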
What’s a CSV Plot Agent? I wanted to create an agent that automatically analyzes and visualizes data from a CSV. I built it using LangChain and Streamlit (two Python frameworks).

☕ Check out the step-by-step guide here: medium.com/towards-arti...

☕ GitHub repo: github.com/Sari95/CSV-P...
CSV Plot Agent with LangChain & Streamlit: Your Introduction to Data Agents
How you can learn the basics of tool-based agents with LangChain, GPT-4o-mini and Streamlit.
medium.com
September 22, 2025 at 7:22 PM
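Not the code from the guide, just a sketch of the kind of tool such an agent can call (assuming pandas, matplotlib and langchain-core are installed; the function and file names are made up, and the agent plus Streamlit wiring are what the article covers):

```python
# A plotting tool that a tool-based LangChain agent could decide to call.
import pandas as pd
import matplotlib.pyplot as plt
from langchain_core.tools import tool

@tool
def plot_column(csv_path: str, column: str) -> str:
    """Plot a histogram of one column of a CSV file and save it as a PNG."""
    df = pd.read_csv(csv_path)
    df[column].hist()
    out_file = f"{column}_hist.png"
    plt.savefig(out_file)
    plt.close()
    return f"Saved plot to {out_file}"

# The decorator turns the function into a tool the LLM can call; it can also be
# invoked directly for testing (file and column names are placeholders).
print(plot_column.invoke({"csv_path": "data.csv", "column": "price"}))
```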
Reposted by Sarah Lea | BSc BI & AI | Salesforce Consultant
@taupirho.bsky.social breaks down the enduring value of Tkinter. He explains how the original Python GUI builder, around long before tools like Streamlit or Gradio, can still be used to create sleek, functional dashboards.
Building a Modern Dashboard with Python and Tkinter | Towards Data Science
Create polished GUIs and data dashboards with this versatile library
towardsdatascience.com
September 12, 2025 at 4:02 AM
Reposted by Sarah Lea | BSc BI & AI | Salesforce Consultant
Struggling to automate your exploratory data analysis? 😫 @sarah-lea.bsky.social's new article shows how to build a CSV sanity-check agent with @langchain.bsky.social that automatically inspects data for you. Learn to use a system that acts, not just talks.
LangChain for EDA: Build a CSV Sanity-Check Agent in Python | Towards Data Science
A practical LangChain tutorial for data scientists to inspect CSVs
towardsdatascience.com
September 10, 2025 at 2:05 PM
Can an agent automate your EDA?
I gave it a try with LangChain. And yes, it can generate descriptive stats, for example. Not groundbreaking, but a fun way into agent workflows.

Would you trust an agent with your data analysis?

🤓 On @towardsdatascience.com: towardsdatascience.com/langchain-fo...
From Manual EDA to AI-Powered Agents: A Hands-On Experiment with LangChain
Can an agent take over repetitive EDA tasks? A quick LangChain experiment.
medium.com
September 11, 2025 at 5:49 PM
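Not the article's agent itself, just a sketch of the kind of descriptive checks it can run as a tool (plain pandas; "data.csv" is a placeholder):

```python
# Basic CSV sanity checks: shape, duplicates, missing values, dtypes, summary stats.
import pandas as pd

def sanity_check(path: str) -> str:
    """Return a short text report an agent (or a human) can read."""
    df = pd.read_csv(path)
    parts = [
        f"shape: {df.shape[0]} rows x {df.shape[1]} columns",
        f"duplicate rows: {df.duplicated().sum()}",
        "missing values per column:\n" + df.isna().sum().to_string(),
        "dtypes:\n" + df.dtypes.to_string(),
        "numeric summary:\n" + df.describe().to_string(),
    ]
    return "\n\n".join(parts)

print(sanity_check("data.csv"))
```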
☕ How does AI learn to choose wisely?
Start with exploration vs. exploitation and the Multi-Armed Bandit problem.
Simple, powerful and the perfect intro to Reinforcement Learning.
towardsdatascience.com/simple-guide...
Simple Guide to Multi-Armed Bandits: A Key Concept Before Reinforcement Learning | Towards Data Science
How AI learns to make better decisions and why you should care about exploration vs. exploitation
towardsdatascience.com
August 6, 2025 at 8:56 PM
I was looking for an easier way to write my bachelor thesis and tried a tool that surprised me in a good way.

☕ No formatting chaos, and adding citations and a bibliography became much easier. I could concentrate far more on the writing itself.

medium.com/code-like-a-...
The Smarter Way to Write Your Thesis: OneNote Meets LaTeX
Part 1 of a series for curious minds: From papers to research, discover how the right tools simplify (and maybe even supercharge) your…
medium.com
August 5, 2025 at 7:18 PM
Have you seen how ChatGPT’s Agent Mode can now add events directly to your Google Calendar? www.youtube.com/watch?v=xhAt...
I was surprised by how smoothly it already works.
ChatGPT Agent Mode August 2025 Creating Events
YouTube video by Sarah Schürch
www.youtube.com
August 3, 2025 at 6:02 PM
ChatGPT made my study plan and added it to my calendar. It worked.

Two new modes show why 2025 is the year of AI agents:
☕ Agent Mode: Agents that act.
☕ Study & Learn Mode: Tutors that think with you.

Tried them yet? 👉 medium.com/p/77e5477efe59
August 2, 2025 at 7:30 AM
Yesterday my student asked:
“Why normalize a database? Isn’t one big table easier?”

A classic first question in relational DBs.
☕ One big table feels simple, until you run into redundancy, anomalies & messy updates.
☕ Normalization means storing each fact once, in the right place. Clean & reliable. (A tiny example follows below.)
July 28, 2025 at 9:53 AM
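A tiny illustration of that point (a sketch with Python's built-in sqlite3; the customer/orders schema and the data are invented):

```python
# "Store each fact once": a customer's city lives only in the customer table.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE orders   (id INTEGER PRIMARY KEY,
                           customer_id INTEGER REFERENCES customer(id),
                           item TEXT);
    INSERT INTO customer VALUES (1, 'Ada', 'Zurich');
    INSERT INTO orders   VALUES (1, 1, 'Coffee'), (2, 1, 'Tea');
""")

# One UPDATE fixes the city everywhere. In one big table you would have to
# update every order row for Ada, and risk an update anomaly if you miss one.
cur.execute("UPDATE customer SET city = 'Bern' WHERE id = 1")
rows = cur.execute("""
    SELECT o.item, c.name, c.city
    FROM orders o JOIN customer c ON o.customer_id = c.id
""").fetchall()
print(rows)   # [('Coffee', 'Ada', 'Bern'), ('Tea', 'Ada', 'Bern')]
```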
Reposted by Sarah Lea | BSc BI & AI | Salesforce Consultant
How AI learns to make better decisions and why you should care about exploration vs. exploitation.

By @sarah-lea.bsky.social
Simple Guide to Multi-Armed Bandits: A Key Concept Before Reinforcement Learning | Towards Data Science
How AI learns to make better decisions and why you should care about exploration vs. exploitation
towardsdatascience.com
July 26, 2025 at 9:02 PM
To learn from experience, a reinforcement learning agent needs 4 elements:

☕State: What situation is the agent in?
☕Actions: What are possible moves from here?
☕Reward: What does the agent receive after an action?
☕Value function: How good is a state?

That’s how RL agents learn by trial & error.
July 26, 2025 at 6:48 AM
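A minimal sketch of those four elements in code (a toy 1-D corridor with a random policy and a TD(0) value update; everything here is illustrative, not from an article):

```python
# Toy example: a 5-cell corridor, start at position 0, goal at position 4.
import random

states = range(5)                      # State: the agent's position
actions = [-1, +1]                     # Actions: step left or right
V = {s: 0.0 for s in states}           # Value function: how good is each state?
alpha, gamma = 0.1, 0.9                # learning rate and discount factor

for episode in range(500):
    s = 0
    while s != 4:
        a = random.choice(actions)                 # trial & error (random policy)
        s_next = min(max(s + a, 0), 4)
        r = 1.0 if s_next == 4 else 0.0            # Reward: only on reaching the goal
        # TD(0) update: nudge V(s) toward reward + discounted value of the next state.
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print({s: round(v, 2) for s, v in V.items()})      # values rise toward the goal
```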
A baby doesn’t read a manual to learn to walk. Neither does an AI agent.

Reinforcement Learning (RL) isn’t about knowing the answer. It’s about learning through interaction.

That’s how AlphaGo beat a world champion:
It first learned from expert games. Then it played against itself, over & over again.
July 23, 2025 at 6:25 PM
Which strategy do you use to learn something new?

Multi-Armed Bandits use 3:
☕ Greedy: Stick with what works.
☕ ε-Greedy: Try new things sometimes.
☕ Optimistic: Assume it’s all good — at first.

Which one sounds most like you?
July 21, 2025 at 7:27 PM
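A minimal sketch of those three strategies on a toy 3-armed bandit (the reward means, ε and the optimistic initial value are arbitrary illustrative choices):

```python
# Toy 3-armed bandit; the true reward means are hidden from the agent.
import random

true_means = [0.2, 0.5, 0.8]

def pull(arm):
    return random.gauss(true_means[arm], 1.0)

def run(epsilon=0.0, initial_q=0.0, steps=2000):
    Q = [float(initial_q)] * 3        # estimated value of each arm
    N = [0] * 3                       # pull counts
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:                 # ε-Greedy: explore sometimes
            arm = random.randrange(3)
        else:                                         # Greedy: exploit the best estimate
            arm = max(range(3), key=lambda a: Q[a])
        reward = pull(arm)
        N[arm] += 1
        Q[arm] += (reward - Q[arm]) / N[arm]          # incremental sample average
        total += reward
    return total / steps

print("greedy    :", round(run(), 3))
print("e-greedy  :", round(run(epsilon=0.1), 3))
print("optimistic:", round(run(initial_q=5.0), 3))    # optimistic initial values
```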
LLMs don’t know your PDF. Or your wiki.

What they can do with RAG is search your docs in the background & answer using what they find. Simple, but effective.

How?
☕Chunking splits the doc into smaller parts.
☕Embeddings turn them into vectors.
☕Retriever finds matches. LLM writes the answer.
July 20, 2025 at 4:37 PM
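A minimal sketch of those steps end to end, minus the final LLM call (assuming sentence-transformers and faiss-cpu are installed; the documents and the query are invented):

```python
# Chunking -> embeddings -> retrieval; the LLM would take over at the last step.
from sentence_transformers import SentenceTransformer
import numpy as np
import faiss

docs = ["Employees get 25 vacation days per year.",
        "Remote work is allowed two days per week.",
        "The office is closed on public holidays."]
query = "How many vacation days do I get?"

# 1) Chunking: here every sentence is already one small chunk.
# 2) Embeddings: turn chunks and the query into vectors.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = np.asarray(model.encode(docs, normalize_embeddings=True), dtype="float32")
query_vec = np.asarray(model.encode([query], normalize_embeddings=True), dtype="float32")

# 3) Retriever: nearest-neighbour search (inner product == cosine, vectors are normalized).
index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)
scores, ids = index.search(query_vec, 1)

# 4) The retrieved chunk would now go into the prompt for the LLM.
print(f"top match ({scores[0][0]:.3f}): {docs[ids[0][0]]}")
```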
Want to really understand how RAG works?

Then stop reading theory & build your own chatbot with:
☕ LangChain
☕ FAISS (vector DB)
☕ Mistral via Ollama
☕ Python & Streamlit

Follow this step-by-step guide:
👉 medium.com/data-science...

Comment WANT if you need the friend link to the Medium article.
RAG in Action: Build your Own Local PDF Chatbot as a Beginner
Understanding chunking, embeddings and vector search better by building a PDF chatbot with LangChain, Ollama and Mistral.
medium.com
July 19, 2025 at 2:44 PM
🍕 Always pick the best pizzeria so far? That’s the greedy strategy.

But what if there's a better one you never tried?

Multi-armed bandits explore this dilemma with strategies like Greedy, ε-Greedy & Optimistic Initial Values.

☕ → towardsdatascience.com/simple-guide...
Simple Guide to Multi-Armed Bandits: A Key Concept Before Reinforcement Learning | Towards Data Science
How AI learns to make better decisions and why you should care about exploration vs. exploitation
towardsdatascience.com
July 17, 2025 at 3:09 PM
Reinforcement Learning starts with a simple but powerful idea: Trial & Error. Learning what works.

The Multi-Armed Bandit problem is a first step into this world.
It's not just about slot machines. It's about how AI (and humans) learn to choose.

towardsdatascience.com/simple-guide...
Simple Guide to Multi-Armed Bandits: A Key Concept Before Reinforcement Learning | Towards Data Science
How AI learns to make better decisions and why you should care about exploration vs. exploitation
towardsdatascience.com
July 16, 2025 at 7:40 PM
Do you always go to the same café? Or do you try something new?

That’s the exploration vs. exploitation dilemma.

Multi-armed bandits model it.

Kahneman called it one of our core patterns of decision-making.

🎰 Read the full article @towardsdatascience.com: towardsdatascience.com/simple-guide...
Simple Guide to Multi-Armed Bandits: A Key Concept Before Reinforcement Learning | Towards Data Science
How AI learns to make better decisions and why you should care about exploration vs. exploitation
towardsdatascience.com
July 15, 2025 at 7:35 PM