Databricks Data + AI Summit (2025) – First Thoughts

Hello and welcome! I know I’m very late to the party — it’s been about a month since the Databricks Data + AI Summit, but I’ve finally had a bit of time to catch up and pull together some of my thoughts.

This isn’t going to be a deep dive (that’ll come later). It’s more of a first impressions post on what caught my attention, what I found interesting, and where I think things could be useful. I got most of my info from what Databricks themselves released, as well as the team over at Advancing Analytics (great content from them, as usual). So let’s get into it!

Databricks Free Edition – A Massive Upgrade


Let’s start with something I think is a huge win — the Databricks Free Edition. This feels like a proper upgrade over the old Community Edition, which, let’s be honest, was so limited you’d sometimes think “what’s the point?”

But now, you can actually do quite a lot. It’s still small-scale — it runs on a serverless cluster — but it’s free, no credit card required, and perfect for people like me who just want to train, explore, and get stuck in. Definitely one to check out if you haven’t already.
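
Just to give a flavour of what I mean by getting stuck in, here's the sort of small PySpark exploration you can run in a Free Edition notebook on the serverless compute. The data is made up in-line, so it should run anywhere (spark is the session a Databricks notebook gives you):

```python
# A small PySpark exploration of the kind Free Edition is perfect for.
# "spark" is the session a Databricks notebook provides; the data is made up.
from pyspark.sql import functions as F

data = [
    ("alice", "books", 12.50),
    ("bob", "games", 59.99),
    ("alice", "games", 19.99),
]
df = spark.createDataFrame(data, ["customer", "category", "amount"])

# Total spend and order count per category, biggest spenders first.
(
    df.groupBy("category")
    .agg(F.sum("amount").alias("total_spend"), F.count("*").alias("orders"))
    .orderBy(F.desc("total_spend"))
    .show()
)
```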

More here: https://journiql.com/databricks-free-edition-launch-first-thoughts/

Databricks Apps – Still Figuring It Out


Next up: Databricks Apps. So, you can now build and deploy data and AI apps within Databricks. I’ll be honest, I’m still trying to fully wrap my head around all the use cases, but a few things stood out. For example:

  • You can embed data visualisations into Power BI reports.
  • There’s support for RAG chat apps powered by Genie.
  • You’ve got custom config options for LakeFlow data entry forms.

It’s one of those features that could be amazing depending on the use case. I think I just need to play around with it more and understand the cost implications too. One to explore further.
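
To make it a bit more concrete, here's a rough sketch of the kind of thing I imagine deploying as a Databricks App: a tiny Streamlit dashboard that pulls numbers out of a table via the Databricks SQL connector. The table, columns and environment variables are placeholders I've made up for the example, not anything from the announcement.

```python
# A minimal Streamlit dashboard of the kind you could deploy as a Databricks App.
# Connection details and the table/columns below are placeholders, not real values.
import os

import pandas as pd
import streamlit as st
from databricks import sql  # databricks-sql-connector

st.title("Sales by region")

# Assumes the connection details are supplied via environment variables / app config.
with sql.connect(
    server_hostname=os.environ["DATABRICKS_SERVER_HOSTNAME"],
    http_path=os.environ["DATABRICKS_HTTP_PATH"],
    access_token=os.environ["DATABRICKS_TOKEN"],
) as conn:
    with conn.cursor() as cursor:
        cursor.execute(
            "SELECT region, SUM(amount) AS total FROM main.demo.sales GROUP BY region"
        )
        rows = cursor.fetchall()

df = pd.DataFrame([tuple(r) for r in rows], columns=["region", "total"])
st.bar_chart(df.set_index("region"))
```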

Databricks One – ChatGPT, But for Your Data


Now this one’s really interesting — Databricks One. For me, the easiest way to describe it is: imagine ChatGPT, but inside Databricks. You ask for data in plain English and it gives it to you — within the limits of what you’re allowed to access, of course.

It looks especially handy for non-technical users who want to build quick reports and dashboards without needing to dive into code. It’s currently in beta and supposed to be fully released later this summer (though they haven’t given a date).

The one thing I’d love to see: make it available outside the Databricks workspace. Like, if we could get a standalone app or even just a URL that people could access without having to go into Databricks — that would be really cool.

Unity Catalog Metrics – Some Potential Here


Another feature they talked about is Unity Catalog Metrics. From what I understand, you define a metric once in Unity Catalog and it becomes reusable across the platform, so dashboards, queries, and other tools all work from the same definition.
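
To make the idea concrete, here's the "define it once, reuse it everywhere" pattern expressed as a plain SQL view from a notebook. To be clear, this is just a stand-in, not the actual Metrics syntax, and all the names are made up; the feature itself is meant to give you that consistency natively.

```python
# NOT the Unity Catalog Metrics syntax, just a plain view to illustrate the idea
# of defining a measure once instead of re-writing it in every dashboard.
# Catalog, schema, table and column names are made up.
spark.sql("""
    CREATE OR REPLACE VIEW main.finance.revenue_metrics AS
    SELECT
        order_date,
        region,
        SUM(amount)                 AS gross_revenue,
        SUM(amount) - SUM(discount) AS net_revenue
    FROM main.finance.orders
    GROUP BY order_date, region
""")
```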

I’ll be honest — I need to do more research on this one. It sounds like it could be useful, especially in larger teams where consistency is important. But I’m not 100% certain on how it all works yet, so I’ll leave it there for now.

LakeFlow Connect – Big Win for Data Ingestion


Now this one I am excited about — LakeFlow Connect. It makes bringing data in from multiple sources really easy. Some of the main ones they showed were:

  • Salesforce
  • SQL Server
  • Google Analytics
  • Workday

But there’s probably more. This is going to help a lot when it comes to getting data in quickly and reliably. It also solves a lot of those annoying CDC (Change Data Capture) issues. In the past, we had to do these awkward workarounds — like enabling CDC on the source, generating tables, and so on. But now, it looks like it handles all of that for you.
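
For context, this is roughly the kind of upsert boilerplate we used to hand-write to apply captured changes into a Delta table, which is exactly what LakeFlow Connect should now take off our plates. The table names, join key and operation column here are hypothetical:

```python
# The manual CDC-apply pattern we used to maintain ourselves: read the captured
# change rows, then MERGE them into the target Delta table by hand.
# Table names, the join key, and the "operation" column are hypothetical.
from delta.tables import DeltaTable

changes_df = spark.read.table("staging.customer_changes")  # captured change rows

target = DeltaTable.forName(spark, "main.crm.customers")

(
    target.alias("t")
    .merge(changes_df.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedDelete(condition="s.operation = 'DELETE'")
    .whenMatchedUpdateAll(condition="s.operation != 'DELETE'")
    .whenNotMatchedInsertAll(condition="s.operation != 'DELETE'")
    .execute()
)
```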

It also supports managed ingestion from apps, databases, file systems, and even real-time streams. And all of it’s done through the UI — no code needed. Really impressive.

LakeFlow Declarative Pipelines – Formerly DLT, Now Even Better


This used to be called Delta Live Tables (DLT), but they’ve now rebranded and added more features under the name LakeFlow Declarative Pipelines.

Basically, this is a more powerful way to automate your data ingestion and transformation. You can build pipelines that clean your data, handle retries and checkpointing for you, and move you closer to that gold (or even platinum) data layer.
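
If you haven't used DLT before, here's a minimal sketch of what a declarative pipeline looks like in the Python API, which (as far as I can tell) carries straight over under the LakeFlow name. The paths, tables and columns are examples I've made up:

```python
# A minimal declarative pipeline: one raw table loaded incrementally, one cleaned
# table with a data quality expectation. Names and paths are made-up examples.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders loaded incrementally from cloud storage.")
def orders_bronze():
    # Auto Loader picks up new files as they arrive; the path is a placeholder.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/demo/raw_orders/")
    )

@dlt.table(comment="Cleaned orders ready for reporting.")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # rows failing this get dropped
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("order_date", F.to_date("order_ts"))
        .select("order_id", "customer_id", "amount", "order_date")
    )
```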

It’s not just a name change either — they’re really investing in making this more robust. If you’ve already got DLTs in place, it looks like this builds on top of that with a lot of automation. Very excited to try this out.

LakeFlow Designer – Drag and Drop ETL


LakeFlow Designer is another new feature that looks very promising. It lets you build ETL pipelines using drag-and-drop, so you don’t need to write any code.

It’s not released yet (they’ve said “in the coming months”), but this could be a game-changer for teams that don’t have coding expertise but still need to build reliable data pipelines.

How powerful it’ll actually be — we’ll see when it’s out. But on paper, it sounds great.

LakeBridge and LakeBase – Quick Mentions


Lastly, we’ve got LakeBridge and LakeBase. I didn’t dig too deeply into these, but from what I gathered:

  • LakeBridge helps you migrate existing data warehouse workloads into Databricks SQL.
  • LakeBase looks to be a managed Postgres database built into Databricks (and I don’t personally use Postgres much).

If you’re using Postgres or looking to move workloads into Databricks, these could be worth checking out. But for me, I’m not rushing into these just yet.

Final Thoughts


So overall, what stood out most for me?

  • The Databricks Free Edition is a huge upgrade and great for learning.
  • LakeFlow Connect is going to make integrating data from different sources so much easier.
  • LakeFlow Declarative Pipelines is a big one — especially with the automation they’re adding.
  • The Apps and Databricks One features could be really useful — just depends on the use case and how easy they are to implement.

There’s definitely more I want to test out and do a deeper dive on, so keep an eye on the channel (and the blog) for that.

Thanks for reading — and if you’ve got any thoughts or corrections, feel free to drop them in the comments. I’ll see you in the next one!
