How To: Creating a Persistent Log Buffer in SQL 2019 | Data Exposed

>>SQL Server 2019 brings a new feature to Linux called Persistent Log Buffer. It was available on Windows before, and now on Linux as well, and it helps you eliminate bottlenecks that can occur while waiting for a log buffer to flush to disk. Brian is here to tell us all about it today on Data Exposed. [MUSIC]>>Hi, and welcome to another episode of Data Exposed. I'm your host Jeroen, and today I have Brian with me to talk about Persistent Log Buffers in SQL 2019. So hi, Brian, welcome to the show.>>Hi, Jeroen. Thank you.>>So we're going to talk
about Persistent Log Buffers?>>Yes. So->>What is that?>>So Persistent Log Buffer is one of what we call the In-Memory Database feature family, which includes In-Memory OLTP; Persistent Log Buffer, which I'll demonstrate today, sometimes called Tail-of-Log Caching; Data and Log File Enlightenment on Linux; Hybrid Buffer Pool on Linux and Windows; and Memory-Optimized TempDB Metadata.>>Okay. Cool.>>So I'll just mention quickly
about persistent memory devices. A lot of people haven't seen them, but essentially these are regular DIMMs that you fit into your server and that come in different capacities. NVDIMM-N, which is one type of persistent memory technology, comes in 8, 16, or 32 gigabyte DIMM capacities, and then the latest Intel Optane DC Persistent Memory comes in much higher capacities of 128, 256, or 512 gigabyte DIMMs.>>And all of that is persistent memory. Wow.>>Yes. So on an eight-socket server, you can support up to 24 terabytes of persistent memory.>>And I can unlock all of that with this persistent
log buffer, right?>>Correct.>>Wow.>>Persistent Log Buffer is designed to solve a particular use case where you are incurring slowdowns or waits in your workload, waiting for the log buffer that is in memory to flush to disk.>>Okay.>>So it uses the persistent memory device, and it knows that once the log buffer is written to that device, it doesn't need to wait for the flush, because it's already on a persistent device.>>Then the device will take care of the rest.>>Yes, the device will then take care of the rest while you carry on essentially with your workload.>>Yeah.
>>So when you're setting up these devices in Windows, we have some basic recommendations: that you lock pages in memory, and that you use the two megabyte allocation unit size for NTFS, which won't be the default.>>Okay.>>Also, you need to set this DAX flag. DAX is really what enables us to treat a persistent memory device specially and write to it directly, skipping all of the kernel stack that you would typically need when dealing with files. It won't be available in the GUI, so you will need to use some PowerShell for this.
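A minimal sketch of what that PowerShell step might look like, assuming the persistent memory volume will be drive D: (the drive letter is a placeholder, and you should confirm the parameters against the documentation for your Windows Server build):

    # Format the PMEM volume as NTFS with a 2 MB allocation unit and the DAX flag set
    Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 2MB -IsDAX $true

    # Confirm the allocation unit size ("Bytes Per Cluster") from the command line
    fsutil fsinfo ntfsinfo D: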
>>Okay. All right. You will show us how this works, right?>>Yes. I will show how these get configured. Also, some of the OS-level disk counters that you may be used to looking at, like disk transfers and so forth, may not be available to you when you're working with persistent memory devices. That's just one of the things you need to be aware of.>>Sure.>>These are new devices and this is very brand-new, exciting tech.>>Okay.>>So there may be some catching up to do on the monitoring side.>>Sure.>>For Linux, ndctl, the non-volatile device control utility, is what you use to configure this. You will set the namespace to fsdax mode, use two megabyte huge pages, and set your block allocation also to two megabytes. We support XFS or EXT4; these are the two supported file systems with DAX.
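A rough sketch of that Linux configuration, assuming a single region exposed as namespace0.0, an XFS file system, and a mount point of /var/opt/mssql/dax (the device names, mount point, and exact alignment options are assumptions to adapt to your hardware):

    # Reconfigure the namespace in fsdax mode
    sudo ndctl create-namespace -f -e namespace0.0 --mode=fsdax --map=mem

    # Format the resulting /dev/pmem0 device with XFS, aligned to 2 MB
    sudo mkfs.xfs -f -d su=2m,sw=1 /dev/pmem0

    # Mount with the dax option so files can be mapped directly
    sudo mkdir -p /var/opt/mssql/dax
    sudo mount -o dax,noatime /dev/pmem0 /var/opt/mssql/dax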
>>Okay.>>So Persistent Log Buffer has actually been available in SQL Server since SQL 2016, but only for Windows until now. With SQL 2019, we'll have this feature available on Linux as well as Windows. It uses only a very small amount of capacity; the log buffer is only 20 megabytes per user database.>>Okay.>>So it really doesn't take up a huge amount of capacity, and the behavior that you get is very similar to forcing delayed durability.>>Okay.>>So again, you're not waiting for that log flush to happen to disk, but you incur none of the risks around data loss that you take with Forced Delayed Durability.
>>So can you tell us a little bit more about Forced Delayed Durability, for those that are->>Sure, for those->>-not aware of it?>>Yes. For those who are not familiar, this is essentially an asynchronous commit mechanism in SQL Server.>>Okay.>>So there are a couple of ways to do it. One is allowed, in which case your normal commits happen as you expect, you wait for the flush, you wait for them to be hardened on disk, or there is a forced mode where all commits behave like this.>>Okay.>>So with allowed, you specify on a per-commit basis if you want this behavior, and that's allowed. Disabled, which is the default, means it doesn't matter what you have in there, it's not going to happen.>>Sure.>>Then with forced, all commits behave this way.
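For reference, a minimal T-SQL sketch of those three settings (the database name is a placeholder):

    -- Database-level setting: DISABLED is the default, ALLOWED lets individual
    -- commits opt in, and FORCED makes every commit behave asynchronously
    ALTER DATABASE MyDatabase SET DELAYED_DURABILITY = ALLOWED;

    -- With ALLOWED, a transaction opts in at commit time
    COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

    -- With FORCED, every commit behaves this way regardless of the commit syntax
    ALTER DATABASE MyDatabase SET DELAYED_DURABILITY = FORCED;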
>>Okay. So the Persistent Log Buffer is very similar but not entirely the same?>>Very similar but not entirely the same, because we have the persistent memory device, we put our log buffer on there, and once we write there we know that it's persisted. We don't have any risk of data loss in the event of a server crash, power failure, anything of that nature; we can recover from the data on the persistent memory device.>>Okay. Cool.>>It's actually quite simple. A lot of people don't realize, you simply add a log file of 20 megabytes on the persistent memory device, SQL Server will recognize this device, and it will treat it as the log buffer.>>It's very simple.>>Really that simple.>>Wow.
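A sketch of that single step in T-SQL, where the database name and the path on the DAX-formatted volume are placeholders (whatever size you specify, SQL Server uses a 20 megabyte persistent log buffer):

    -- Add a second, small log file on the DAX volume; SQL Server detects the
    -- DAX-enabled device and uses this file as the persistent log buffer
    ALTER DATABASE MyDatabase
    ADD LOG FILE (
        NAME = MyDatabase_dax_log,
        FILENAME = 'D:\DAXLog\MyDatabase_dax_log.ldf',
        SIZE = 20MB
    );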
>>Yeah, and as we can see here, our log buffer is sitting on our storage class memory, which is PMEM; in some places we call it storage class memory, but it's the same thing. Our log records are there, and as I mentioned, we don't have to wait for them to be flushed to the main transaction log file.>>Cool.
>>So I'll just switch quickly to my demo here.>>Yeah.>>First, I'll just show that we have configured our persistent memory devices here. As I mentioned, these are regular DIMMs; you can see the Device IDs there. We've configured two devices, one per NUMA node.>>Okay.>>Each is interleaved across the DIMMs on that NUMA node. So this is the recommended way that we say to set it up.>>Okay.
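On Windows Server 2019, the persistent memory PowerShell cmdlets are one way to look at this layout; a sketch, with output that will of course depend on your hardware:

    # List the physical persistent memory modules (the individual DIMMs)
    Get-PmemPhysicalDevice

    # List the logical persistent memory disks built on top of them,
    # for example one interleaved disk per NUMA node
    Get-PmemDisk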
>>Again, we can see that our DAX value is enabled, it's set to true here, and if we want to use the older command-line utility, we can get a little bit more info here and see that we have set the allocation unit size to two megabytes.>>As you just described. It should be- yeah.>>Yeah. As I've just described. And it's quite simple: we just add the log file, as I mentioned, and we just create it. Regardless of what size you put in here, it will actually only use 20 megabytes, but just go ahead and say 20 megabytes.>>Yeah. Just to make sure.>>Yeah, and it's really that simple.
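If you want to confirm what was added, a quick sketch against the database's file catalog view:

    -- The new 20 MB log file on the DAX volume shows up alongside the
    -- regular data and log files for the current database
    SELECT name, physical_name, type_desc, size * 8 / 1024 AS size_mb
    FROM sys.database_files;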
>>Wow. All right. So that's impressive. So basically I can unlock all this new tech with a Persistent Log Buffer by just running a very simple command, right?>>Yeah.>>Sure. You have to configure the device first, and then after that's done, in SQL you just add a log file.>>Yeah, and this type of technology is really enabling a new tier of storage, helping remove some of the traditional bottlenecks that we see in SQL Server on high-end workloads.>>Right. So a big innovation, but done in a very simple manner for the user and for the configuration.>>Yes. We built intelligence into SQL Server to recognize these devices and behave accordingly.>>Yeah. Very cool. Well, thanks for sharing.>>Thank you.>>I think this was very useful, very interesting, at least to me. I hope this was useful and interesting to you as well. Please subscribe, like, and comment on the video, and I hope to see you next time on another episode of Data Exposed. Thanks. [MUSIC]
