I’ve been working with Claude for some time now to build a Data Platform on MS Fabric and a predictive target-demand model for a large bicycle rental company in the Canary Islands. While digging deeper into Data Science, Claude helped me learn fast – and work smarter.
I also learned how to communicate with Claude, how to ask for task execution (explanations, know-how transfer, code generation, QA), and how to challenge the responses effectively.
For every real-world project I set up a Claude Project (I call it an Agent) that specifies the context, role, and expertise the Agent needs, along with detailed information about the project’s goal, requirements, and environment.
A key insight: you can create a living Semantic Layer by using Project Instructions on multiple levels and from two perspectives.
Separation of Concerns
- System prompt – Behavior & Navigation
  WHO the agent is, HOW to behave and interact, WHERE to find details.
- Project files – Knowledge & Specifications
  WHAT the technical details are, WHAT the data structures look like, WHAT must be done, and WHAT has already been tried and learned.
Project files as a semantic layer
Within the project files, build a metadata layer around your base data model and let it evolve:
- Base data model metadata: tables, columns, datatypes, relationships, business meaning.
- Newly created tables: add and version them for reuse later in the process.
- Requirements and documentation: goals, constraints, acceptance criteria, decisions.
- Usage guidance: examples showing how the agent should use predefined helpers and variables in Spark notebooks.
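As a minimal sketch of what such a metadata layer can look like in practice: the snippet below keeps base-data-model metadata as structured Python data and renders it as Markdown for the project files. The table and column names mirror the article’s rental example, but the schema, meanings, and the `describe` helper are all illustrative assumptions, not a real data model.

```python
# Illustrative base data model metadata: tables, columns, datatypes,
# relationships, and business meaning, kept where the agent can read it.
TABLE_METADATA = {
    "tb_daily_rentals": {
        "business_meaning": "One row per station and day with rental counts.",
        "columns": {
            "rental_date": {"dtype": "date", "meaning": "Calendar day of the rentals."},
            "station_id": {"dtype": "string", "meaning": "Unique rental station identifier."},
            "daily_bike_usage": {"dtype": "int", "meaning": "Bikes rented that day."},
        },
        "relationships": ["station_id -> tb_stations.station_id"],
    }
}

def describe(table: str) -> str:
    """Render one table's metadata as a Markdown block for the project files."""
    meta = TABLE_METADATA[table]
    lines = [
        f"### {table}",
        "",
        meta["business_meaning"],
        "",
        "| column | dtype | meaning |",
        "| --- | --- | --- |",
    ]
    for col, info in meta["columns"].items():
        lines.append(f"| {col} | {info['dtype']} | {info['meaning']} |")
    return "\n".join(lines)
```

Pasting the output of `describe("tb_daily_rentals")` into the project files gives the agent a single, versionable source of truth it can reference in later tasks.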
Practical benefits

- Reference-first tasks
  After the initial base is set, you can define tasks by reference.
  Example: “Use daily_bike_usage between 2022-10-01 and 2024-09-30 from table tb_daily_rentals as the target value in the training dataset.”
  Result: PySpark code with the real SQL statements that runs as-is.
- Documentation by doing
  Task definitions double as living documentation you will actually revisit and refine.
- ⚡ Schema Doc Hack
  After saving a new table to the Lakehouse, run a small function to emit minimal table metadata (name, columns, datatypes) as Markdown. Then have the agent add precise column descriptions based on the current conversation. This produces documentation that is immediately useful because the agent has full context.
- Compounding context
  Paste the new table description into the right spot in your project files. Your semantic layer grows in value with every iteration.
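The Schema Doc Hack above can be sketched as a small helper. This is an assumed implementation, not the author’s actual function: it takes a table name plus a list of `(column, datatype)` pairs and emits a Markdown table with an empty description column for the agent to fill in. In a Spark notebook you could feed it `df.dtypes`, which is the standard PySpark list of such pairs.

```python
def schema_doc(table_name: str, dtypes: list[tuple[str, str]]) -> str:
    """Emit minimal table metadata (name, columns, datatypes) as Markdown.

    In a Fabric/Spark notebook, call it right after saving a table:
        print(schema_doc("tb_daily_rentals", df.dtypes))
    """
    lines = [
        f"## {table_name}",
        "",
        "| column | datatype | description |",
        "| --- | --- | --- |",
    ]
    for column, dtype in dtypes:
        # The description cell is left blank on purpose: the agent fills it
        # in from the current conversation context.
        lines.append(f"| {column} | {dtype} |  |")
    return "\n".join(lines)
```

Paste the printed Markdown into the chat, ask the agent to complete the description column, and then move the finished block into your project files.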
Working rhythm
From time to time, I ask my agents about how we work together. The feedback often surfaces simple improvements that pay off.
Question
Something completely different! We work so well together on this project. I think that’s partly because I make my requirements and the results we’ve worked out together available in the project as Markdown and keep them up to date in the Project Instructions.
How do you see our collaboration? Is there room for improvement?
Answer of the Agent
Hey Jeff! Thanks for asking – our collaboration is working well.
What works
- Your living Markdown in Project Instructions acts like persistent shared memory.
- You maintain healthy skepticism, verify results, and catch pitfalls early.
- You keep tasks crisp, which reduces back-and-forth and accelerates code that runs.
Ideas to improve
- Decision checkpoints: before implementation, I can present 2-3 options with trade-offs and a recommendation.
- Context handoff: when you return, a short “where we left off – what’s next – open risks” note helps align quickly.
- Experiment log: a lightweight Markdown page for model variants – Tried X, learned Y, next Z – linked to MLflow runs.
What do you think? Any friction points you want to change?
Takeaways
With a stable system prompt and evolving project files, you get a practical semantic layer. Keep writing the project as you build it – the agent will use yesterday’s context to make today’s work faster and safer.
My Track Record
Over nearly 30 years, I’ve been building exactly these bridges between business, processes, and data:
- At A1 Telekom Austria, I designed and implemented the Enterprise Information Architecture, later leading the development of the central Big Data Platform and Data Lake.
- I created frameworks for data ingestion, integration, and governance, replacing legacy systems with modern, scalable architectures.
- I’ve worked on data modeling, data warehouses, and BI systems that supported planning, reporting, and decision-making across the company.
- Today, I’m expanding my expertise into Machine Learning, Generative AI, and platforms like Databricks and Azure, making sure my architectural work fully enables next-generation AI solutions.
My passion has always been the same: turning data into clarity and business value.