Štěpán Rešl
@stepanresl.bsky.social
Lead technical consultant @ DataBrothers
Data Platform #MicrosoftMVP ~ #PowerBI & #MicrosoftFabric 🏳️🌈
🌐 Links:
Blog: www.datameerkat.com
GitHub: https://github.com/tirnovar
Sometimes you reach a point where it just doesn't work. Everything that seemed right to you and that you enjoyed doing suddenly makes no sense, and you don't understand it yourself.
September 30, 2025 at 7:42 AM
Yeaaah! Let's create an abstract. Aaaand nope! Let's submit a session to the conference that I know and love, where I've had the opportunity to speak... Nope, that's not something I can do either.
September 30, 2025 at 7:40 AM
I have the same issue with sessions and submitting them. There are two particular topics that I keep admitting to myself I want to talk about.
September 30, 2025 at 7:40 AM
It was calling one of those functions. Still, at a scale like this it doesn't matter whether you call a custom function. You can go for EVALUATE {1} and still get a similar delay.
September 22, 2025 at 2:21 PM
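To make that point concrete, here is a minimal timing sketch, assuming the semantic-link (sempy) library available in Fabric notebooks and a placeholder model name; the only claim is the one above, that even a trivial EVALUATE {1} round-trip shows a similar delay.

```python
# Sketch only: time a trivial DAX query against a semantic model.
# Assumes sempy (semantic-link) is available in the Fabric notebook;
# "My Model" is a placeholder dataset name, not from the original post.
import time
import sempy.fabric as fabric

start = time.time()
result = fabric.evaluate_dax("My Model", "EVALUATE {1}")  # trivial one-row query
print(f"Rows: {len(result)}, elapsed: {time.time() - start:.2f}s")
```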
It is. My concern is that people will import functions the same way they import data columns... just in case, and then never use them.
September 22, 2025 at 1:50 PM
Executing Python notebooks with a large number of cells? Nope! After 20 cells, the notebook stops executing the rest of them and gets stuck in an infinite loop of waiting.
Isn't this beautiful? I think it is. You need to cover every possible scenario on your own.
September 15, 2025 at 4:46 PM
You just need to know a guy who knows a guy
September 8, 2025 at 8:53 AM
Ahh… I am not using that one. It wasn't reliable in my tests.
September 8, 2025 at 8:31 AM
So, after writing and before you read it, you need to set a sleep time. Also, could you tell me which endpoint you are using to trigger synchronization?
September 4, 2025 at 9:26 AM
Depends on how you read it. Are you reading the Delta Table files directly, or through the T-SQL endpoint? If it is through the T-SQL endpoint, then you need to trigger synchronization after every write into the lakehouse.
September 4, 2025 at 9:09 AM
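Taken together with the replies below, the pattern is: write, trigger the endpoint sync, wait, then read. A minimal sketch of that sequence follows, with the sync call left as a hypothetical stub (trigger_sql_endpoint_sync is a placeholder, not a real API) and spark assumed to be the session a Fabric notebook provides.

```python
import time

def trigger_sql_endpoint_sync():
    """Hypothetical placeholder: call whichever metadata-sync endpoint you
    use here (the thread notes there are two, and one is unstable)."""
    pass  # plug in your chosen sync call

# 1) Write into the lakehouse (Delta table); spark is the notebook session.
df = spark.createDataFrame([(1, "a")], ["id", "value"])
df.write.format("delta").mode("append").saveAsTable("my_table")

# 2) Trigger T-SQL endpoint metadata synchronization after the write.
trigger_sql_endpoint_sync()

# 3) Give the endpoint time before reading through it; 300 s mirrors the
#    time.sleep(300) mentioned later in this thread.
time.sleep(300)

# 4) Only now read the table through the T-SQL endpoint
#    (pyodbc, a pipeline, or any other SQL client).
```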
... and they can detect the synchronization process and wait until the schema is updated (or at least, that's what I have seen... I haven't checked it with anyone on the MSFT side).
Plus... what endpoint are you using for it? There are two, and one of them is highly unstable.
September 4, 2025 at 6:58 AM
Refreshing, or perhaps better described as synchronization, of the T-SQL endpoint requires time. Currently, I am using time.sleep(300), but to be honest... I am not using that many direct connections to the T-SQL endpoint besides semantic models...
September 4, 2025 at 6:57 AM
That's a thread that we can start easily
August 6, 2025 at 6:27 PM
Thank you all for your responses. Overall, I feel like I'm losing connection with many MVP colleagues and sometimes even the community because of this.
August 6, 2025 at 5:15 PM
270 parquet files with different schemas that all need to be transformed at once. Even just loading them into PySpark takes 17 minutes because it has to load the files one by one with schema transformation, since the differences are too vast for Gluten...
August 3, 2025 at 3:03 PM
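A rough sketch of that one-by-one load, assuming the files sit under one folder (the paths below are placeholders) and that aligning the differing schemas with unionByName(allowMissingColumns=True) is acceptable here:

```python
from functools import reduce

# Illustrative only: load parquet files with differing schemas one by one
# and align them by column name, padding missing columns with nulls.
# Paths are placeholders; spark is the Fabric notebook session.
paths = [f"Files/raw/file_{i:03d}.parquet" for i in range(270)]

frames = [spark.read.parquet(p) for p in paths]  # each file read separately

# unionByName with allowMissingColumns=True (Spark 3.1+) tolerates the
# schema differences instead of failing on mismatched columns.
combined = reduce(
    lambda left, right: left.unionByName(right, allowMissingColumns=True),
    frames,
)
combined.write.format("delta").mode("overwrite").saveAsTable("combined_raw")
```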
In discussions at SQLBits, I have heard many times that you can't connect notebooks to these KeyVaults... So I needed to find a way... Nice... now I need to write an article about it. This is not exactly what you would expect...
July 30, 2025 at 8:06 AM
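For context only (this is not the workaround hinted at above, which the article will cover): the baseline secret lookup from a Fabric notebook, assuming the notebookutils utilities that Fabric notebooks preload and placeholder vault/secret names.

```python
# Baseline sketch only: fetch a secret from Azure Key Vault inside a Fabric
# notebook. notebookutils is preloaded in Fabric notebooks (mssparkutils is
# the older alias); vault URL and secret name are placeholders. This is the
# standard call, not the workaround from the post above.
secret_value = notebookutils.credentials.getSecret(
    "https://my-keyvault.vault.azure.net/",  # placeholder vault URL
    "my-secret-name",                        # placeholder secret name
)
```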