In Maximum MySQL Database Size?, Nick Duncan wants to find out what the maximum size of his MySQL database can possibly be. He answers that with a list of maximum file sizes per file system type. That is not a useful answer.
While every file system does have a maximum file size, this limitation is usually not relevant when it comes to MySQL maximum database size. But let's start with file systems, anyway.
First: You never want to run a database system on a FAT filesystem, ever. In FAT, a file is a linked list of blocks in the file allocation table. That means certain seek operations, backward seeks in particular, become slower the larger a file is, because the file system has to position the file pointer by traversing the linked list of blocks in the FAT. Since seek operations are basically what a large database does all day, FAT is completely useless for this. So the actual file size limit of FAT is kind of moot for the purpose of this discussion.
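To see why that hurts, here is a minimal sketch in PHP. It is not FAT code, just a toy model of the allocation table as a linked list; the block size and table layout are assumptions:

```php
<?php
// Toy model of a FAT chain: $fat[$block] holds the number of the next
// block of the file. Positioning the file pointer at a byte offset means
// walking the chain from the first block, one table lookup per block,
// so the cost of a seek grows linearly with the offset.
function fat_seek(array $fat, int $firstBlock, int $offset, int $blockSize = 4096): int
{
    $block = $firstBlock;
    for ($i = 0; $i < intdiv($offset, $blockSize); $i++) {
        $block = $fat[$block];
    }
    return $block; // the block containing $offset
}
// A 4 GB file in 4 KB blocks: seeking to its end walks about a million entries.
```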
Second: You also never want to run a database system on a 32 bit operating system. Not only does that limit your file size in Windows, and in certain ways in Linux as well, it also limits the amount of system memory you can invest into MySQL buffer caches. That makes such a system kind of useless, because it is memory that makes databases fast.
On 64 bit systems with modern file systems (NTFS in Windows, XFS on LVM2 in Linux on a recent kernel), the file size limit imposed by the operating system is multiple terabytes, or even petabytes. We will soon see that the exact number is not really relevant.
Continue reading "How large can a MySQL database become?"
Where I work, Merlin is an important tool for us and provides a lot of insight that other, more generic monitoring tools do not. We love it, and in fact love it so much that we have about 140 database agents reporting into Merlin 2.0 from about 120 different machines. That results in a data influx of about 1.2G a day without QUAN, and of about 6G a day with QUAN enabled on a set of selected machines.
This completely overwhelms the Merlin data purge process, so the Merlin database grows out of bounds, which is quite unfortunate because our disk space is in fact very much bounded.
The immediate answer to our purge problem was to disable the Merlin-internal purge and, with the kind help of MySQL support, to create a script which generates a list of record ids to delete. These ids end up in a number of DELETE statements with very large WHERE ... IN (...) clauses that do the actual deleting.
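A minimal sketch of what such a script could look like; the table, column and connection details are invented for illustration, not the actual Merlin schema or our actual script:

```php
<?php
// Sketch: collect the ids of expired rows, then delete them in bounded
// chunks so no single statement or transaction grows without limit.
// All names are hypothetical.
$db = new mysqli('localhost', 'merlin', 'secret', 'merlin');

$ids = [];
$res = $db->query("SELECT id FROM agent_samples WHERE taken_at < NOW() - INTERVAL 14 DAY");
while ($row = $res->fetch_row()) {
    $ids[] = (int) $row[0];
}

foreach (array_chunk($ids, 10000) as $chunk) {
    $db->query("DELETE FROM agent_samples WHERE id IN (" . implode(',', $chunk) . ")");
    sleep(1); // breathe between chunks, so purge and replication can keep up
}
```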
This is a band-aid fix, which does work in a way, but it also has unintended consequences. Or, as we say around here: 'That also breaks, but in a different and interesting way.'
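The article title already hints at the cleaner direction: with a time-partitioned table, old data can be dropped one partition at a time instead of row by row, and dropping a partition creates no row-level purge work the way DELETE does. A hedged sketch, again with invented names:

```php
<?php
// Sketch: purge by dropping a daily partition instead of DELETEing rows.
// Assumes the table was created with something like
//   PARTITION BY RANGE (TO_DAYS(taken_at)) (PARTITION p20080101 ..., ...)
// Names are invented for illustration.
$db = new mysqli('localhost', 'merlin', 'secret', 'merlin');

$cutoff = date('Ymd', strtotime('-14 days'));
$db->query("ALTER TABLE agent_samples DROP PARTITION p{$cutoff}");
```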
Continue reading "DELETE, innodb_max_purge_lag and a case for PARTITIONS"
According to my findings in Bug #31876, MySQL on Windows does not commit data to disk using the same method that MS SQL Server and DB/2 use. The method MySQL uses appears to be seven times slower in pathological scenarios.
The bug report contains a patch. Thanks go to the MySQL WTF (the Windows Task Force) and to the lab provided by the customer for helping me find this.
Does this work for you? I want to hear about your test results.
Lately, I have had the opportunity to evaluate a very large Ruby installation that was also growing very quickly. A lot of the work performed on site has been specific to that site, but other observations hold for the platform no matter what is being done with it. This article is about Ruby on Rails and its interaction with MySQL in general.
Continue reading "Rubyisms"
This article does not even contain the words database or MySQL. I still believe it is somewhat interesting.
Mail has, for some reason, always played a big role in my life. In 1988, I was running mail for two people: my girlfriend and me. In 1992, I was running mail for 20 and then 200 people, setting up a citizens' network. Later I designed and built mail systems for corporations of 2 000 and 20 000 people, and planned mail server clusters for 200 000 and 2 million users. And just before I became a consultant at MySQL, I was working for a shop that did mail for a living, for 20 million users.
Mail is a very simple and well defined collection of services. You accept incoming messages for local users, you implement relaying for your local users with POP-before-SMTP and SMTP AUTH, you build POP, IMAP and webmail access, and you deploy spam filter systems and virus scanners for incoming and outgoing messages. This collection of services hardly changes as you go from 2 to 20 million users; maybe the larger systems will also provide additional services such as portal services, a news server or other more directed stuff, but that is just fluff outside the scope of the mail system. The solutions, though, are very different, and very much dependent on the scale of your operations.
Continue reading "Changing everything"
In Semi-Dynamic Data, Sheeri writes about semi-dynamic data and content pregeneration. In her article, she suggests that for rarely changing data it is often advisable to precompute the result pages and store them as static content. Sheeri is right: Nothing beats static content, neither for speed nor for reliability. But pregenerated pages can be a waste of system resources when the number of possible pages is very large, or when most of the pregenerated pages are never hit.
An intermediate solution may be a statification system with some clever caching logic.
Statification is the process of putting your content generation code into a 404 handler and having that handler generate the requested content and write it out as a file. The idea is that on a second request the content will already be there, so a static file is served with a 200 OK, using the fast path of the web server.
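The skeleton of such a handler, as a minimal sketch. It assumes Apache with ErrorDocument 404 pointing at the script; the names are invented, and the real implementation is in the article behind the link below:

```php
<?php
// Hypothetical 404 handler, assuming Apache is configured with
//   ErrorDocument 404 /statify.php
// First request: generate the content, write it below the document root,
// answer with 200 OK. On the second request the web server finds the file
// and serves it statically; this script never runs again for that URL.

function generate_content(string $uri): string
{
    // Stand-in for your real content generation code.
    return "<html><body>Generated for " . htmlspecialchars($uri) . "</body></html>";
}

$uri  = $_SERVER['REQUEST_URI'];
$file = $_SERVER['DOCUMENT_ROOT'] . $uri;   // a real handler must validate $uri!

$content = generate_content($uri);
file_put_contents($file, $content);         // statify: the next hit is static
header('HTTP/1.0 200 OK');                  // override the default 404 status
echo $content;
```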
A typical example for this kind of task would be a script that renders its own name as a PNG: by requesting http://vvv.k6p.de/statify/example.png, we return a PNG image that contains the text "example". The number of potential PNGs is infinite, and we cannot possibly precalculate all of them. But a few PNGs are requested over and over, most likely because we are referencing them via IMG tags in some HTML pages. It would be pointless to generate them again and again for each requester, because these images never change.
Here is how to do it, using PHP:
Continue reading "Statification"
This is a translation of a German language article I wrote two weeks ago for my German language blog.
In 2004, when I was still working for web.de, I gave a little talk on Scaleout at Linuxtag. Even back then, one major message of the talk was "Every read problem is a cache problem" and "Every write problem is a problem of distribution and batching":
To scale, you have to partition your application into smaller subsystems and replicate the data. The replication should be asynchronous, not using two phase commit (2PC), or the gain will shrink to little or nothing. Writes must be delayed and batched so they can be handled more efficiently. To avoid bottlenecks, data storage should be decentralized and a central database should be avoided.
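To make "delayed and batched" concrete, here is a minimal sketch; the class, table and column names are invented, not from the talk. Instead of one INSERT per event, writes are queued and flushed as a single multi-row INSERT:

```php
<?php
// Sketch: delay and batch writes. Instead of one INSERT per event, queue
// rows and flush them as a single multi-row INSERT. All names invented.
class BatchedWriter
{
    /** @var string[] */
    private array $rows = [];

    public function __construct(private mysqli $db, private int $batchSize = 500)
    {
    }

    public function write(int $userId, string $event): void
    {
        $this->rows[] = sprintf("(%d, '%s')",
            $userId, $this->db->real_escape_string($event));
        if (count($this->rows) >= $this->batchSize) {
            $this->flush();
        }
    }

    public function flush(): void
    {
        if ($this->rows === []) {
            return;
        }
        $this->db->query("INSERT INTO events (user_id, event) VALUES "
            . implode(',', $this->rows));
        $this->rows = [];
    }
}
```

One multi-row INSERT of 500 rows costs one round trip and one commit instead of 500 of each; what you give up is durability of the rows still sitting in the queue, which is exactly the trade-off described above.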
That often contradicts what traditional database classes at university teach: it is insecure, you are working with stale or incorrect data, and you do not even know at all times whether everything worked out the way you imagined it should have.
It has an important advantage: It works. And it scales.
Continue reading "Dealing with failure - the key to scaleout"
In Serving Images from a Database, Sheeri responded to me:
I appreciate your ideas, but you haven't read what I've written: "the sheer number of files is enough to kill any os."
I'm serious about that. We're talking 36 million images -- that's 36 million files (no thumbnails, they're all reduced size) plus directories to store them in. We have caching tuned very well, and it still kills the OS.
Here is what I did to simulate that:
I took my new Strato MR2. It is currently running Tor, INN and MySQL. There are two HDS722516VLSA80 160 GB SATA HDDs. These disks are rated at 8.8 ms average seek time, so one disk can do about 113 seeks per second.
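The arithmetic behind that number, plus one assumption of mine: an uncached image fetch costs more than one seek (directory entry, inode, data block), so the per-spindle image rate is only a fraction of the raw seek rate. A back-of-envelope sketch:

```php
<?php
// Back-of-envelope math for a seek-bound workload.
// The seeks-per-image count is an assumption, not a measurement.
$avgSeekMs   = 8.8;                   // from the drive's data sheet
$seeksPerSec = 1000 / $avgSeekMs;     // ~113.6 seeks per second
$seeksPerImg = 3;                     // assumed: directory entry + inode + data
printf("%.0f seeks/s, roughly %.0f uncached images/s per spindle\n",
       $seeksPerSec, $seeksPerSec / $seeksPerImg);
```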
Continue reading "Serving Images from a File System"
Sheeri wrote:
So, most of the “I want images in MySQL” conversations are terminated with “Don’t.”
People do say this for a reason. Here is the story of your request going through the system:
Continue reading "Serving Images From A Database"
In Is MySQL simple or complex (or both)?, Mike Kruckenberg asks:
So the question is . . . What happens to the people who really just wanted a simple, no frills, easy to use database. Based on the message, MySQL is no longer the database for them. Right?
MySQL 5.0 has gained a lot of features, and some of them, such as the procedural extensions, are as often a benefit to a project as they are a source of problems due to feature abuse. Having more features also makes the product larger and more complex. So yes, 5.0 is a lot more complex than older versions, and it carries around a lot of baggage that many traditional MySQL users do not need, and yet the very same features are crucial for others.
This dilemma is one reason why I look forward to the beginnings of modularization that we see with 5.1; loadable fulltext parsers are only a start. Ultimately, I hope to see loadable storage engines together with loadable SQL syntax extensions (loading a storage engine would also extend at least the CREATE TABLE and ALTER TABLE syntax with the options that engine needs), and perhaps even loadable everything, such as loadable stored procedure interpreters: CREATE SP LANGUAGE AS "/usr/lib/libperl.so" or similar, if you like.
Growing in such a direction would make MySQL a simpler tool than before: in an ideal world, the core would have no features at all besides a DSO loader. At the same time, it would turn MySQL into something more flexible and complex than ever: a data storage and retrieval framework that can mix and match extensions as you need them for your project, and that contains only the code and functionality you actually need. And if something is missing, just write it yourself; write only what is missing, and merge it with the rest within the framework.
(Disclaimer: /me is a MySQL employee, but the above is my vision of MySQL, and it may or may not differ from the company vision. :-)
(title with apologies to Murray Gell-Mann)