What You Need to Know About the MariaDB & Percona Forks of MySQL

Sometimes it feels like you’re bound to hit MySQL eventually, no matter where you dig on the Internet. It lies beneath some of the biggest sites, such as Facebook and Google, and powers major frameworks like Drupal, WordPress and Joomla, and it’s so huge it has its own letter in the LAMP stack (Linux, Apache, MySQL, Perl/PHP/Python). From that standpoint, it comes as no surprise that this open source database has branched off into a series of specialized forks. The two most prominent are MariaDB and Percona.

This is a deeper look at how and why those forks came about, along with some advice on what to consider before diving into either one. Step one is nailing down a bit of background on MySQL itself.

A Brief History of MySQL

In the early 1990s, just as the World Wide Web was being introduced to the world, a network monitoring project named Minerva created an affordable, stripped-down take on SQL called MiniSQL, or just mSQL. It wasn’t open source, but it was free for non-commercial use and sported a price tag in the hundreds of dollars for businesses. It appeared just as web standards were being cobbled together and mini-servers were taking off, so it became a backbone of the emerging web infrastructure.

A few years later, Allan Larsson, David Axmark and Michael “Monty” Widenius adapted mSQL’s interface into a more robust system they named MySQL. Legend has it that it was named after Widenius’ daughter My.

When Sun bought MySQL in 2008, the development community became suspicious of the new owners and the forks began in earnest.

The Evolution of MariaDB

Widenius and Axmark left MySQL shortly after the buyout, and Widenius publicly criticized Sun and Oracle for acting contrary to the spirit of open source. Along with 20 of MySQL’s former top developers, he began a “Save MySQL” campaign.

Oracle bought Sun shortly afterward. For Widenius, that moment marked the birth of MariaDB (maintaining the theme by naming the product after another of his daughters, Maria), an open source alternative to MySQL that would be developed in concert with the community; it was meant to stay free perpetually under the GNU GPL.

The team persuaded Wikimedia to switch to MariaDB in support of open source, and MariaDB had added 10 million users by the end of that year. To make switching easier for organizations of all sizes, MariaDB was built as a simple “drop-in” replacement for MySQL, matching MySQL’s APIs. For storage, it replaced MyISAM with the crash-safe Aria engine and InnoDB with Percona’s XtraDB.

Percona Enters the Scene

Peter Zaitsev and Vadim Tkachenko founded the open source Percona in 2006. The two had spent a great deal of time supporting MySQL and saw what businesses really wanted from it; they incorporated those aspects and made it more affordable, which is why some people consider Percona more of a branch of MySQL than a full fork like MariaDB.

Percona is perhaps best known for taking InnoDB and retooling it into XtraDB, which lies at the root of Percona’s major code enhancements. In other respects, Percona hews closer to MySQL’s development than MariaDB does, since its concerns center on improving server performance. For example, Percona pulls in scalability, availability, security and backup features from the Enterprise version of MySQL.

A Few of the Other Forks

  • Drizzle: This open source fork originates from the abandoned MySQL 6.0 codebase and is aimed at the web infrastructure and cloud community. Its defining idea is a microkernel architecture, with features such as the query cache and authentication implemented as plugins.
  • OurDelta: This fork was designed to shorten feedback cycles and keep MySQL code up to date with innovation on the web. OurDelta made it easier to test in more environments, although it no longer exists as a separate project; its enhancements were folded back into the MariaDB 5.1 release.
  • WebScaleSQL: This is another fork that some consider a branch of MySQL 5.6. It was built in 2014 in association with developers from Facebook, Google, LinkedIn and Twitter, and it was made to deal with some of the most common problems of massively replicated databases operating on server farms. WebScaleSQL’s goal was to deduplicate the efforts those companies had each built independently, and its code is hosted on GitHub under the GPL.

Percona or MariaDB: Which Is Better?

Of course, there’s no easy answer to that question. MariaDB (v10.1) and Percona Server (v5.6) are both production ready, and both can be dropped in fairly easily as a replacement for MySQL v5.7, the latest production-ready release.

Many companies won’t see a difference in performance unless there are specific capabilities they need or bottlenecks they want to overcome. Here are some reasons, other than the fact both are open source, why you might consider one over the other.

Five Reasons to Consider MariaDB

In addition to WikiMedia, companies supporting MariaDB include most major Linux distributions, such as Red Hat and SUSE; in 2013, Google announced it was joining them by moving its largest servers to MariaDB. Look at MariaDB if you:

  • Want to take advantage of an active international community of developers, not just those from Oracle.
  • Need to stay up to date with the latest developments in technology.
  • Have stakeholders that require immediate security patches.
  • Want to get started on upgrade features before they are released in MySQL.
  • Expect alternate storage engines to be built into the code, such as CONNECT and Cassandra for NoSQL back ends, Spider for built-in sharding, or Percona’s TokuDB with its fractal-tree indexes.
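As a sketch of how those pluggable engines surface in practice (the engine shown and the table definition are illustrative, and availability depends on how your MariaDB build was compiled):

```sql
-- Load the CONNECT engine plugin, then confirm it registered.
INSTALL SONAME 'ha_connect';
SHOW ENGINES;

-- A CONNECT table backed by a CSV file (path and columns are placeholders).
CREATE TABLE ext_prices (
    item  VARCHAR(64),
    price DECIMAL(10,2)
) ENGINE=CONNECT TABLE_TYPE=CSV FILE_NAME='/data/prices.csv';
```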

Five Reasons to Consider Percona

Percona estimates there are around 2 million installations, including at companies like Acquia, HP, Flickr, Etsy, Opera and Tumblr. Look at Percona if you:

  • Want queries to come back with results faster. Percona has released several benchmarks favorably comparing its XtraDB storage engine against InnoDB in vanilla MySQL.
  • Need more consistent performance across a variety of very powerful servers.
  • Have plenty of I/O and large working sets that should be handled with no sharding.
  • Want to reduce maintenance time. Percona includes enhanced utilities for online backup and table import/export.
  • Want better tunability. Percona includes additional instrumentation within the MySQL internals to help with monitoring and tuning.

Concerns Before Switching

Your main concerns in making this type of commitment should be operational. Make a list of what information you’ll need to gather, beginning with:

  • Possible migration paths.
  • Key configuration tuning variables.
  • How to reload tables, if that becomes necessary.
  • Whether you need to reconfigure a stand-alone InnoDB server or use something like Percona’s XtraDB.
  • Any documentation on how your applications integrate with the new cluster.
  • What will be necessary to avoid downtime in the migration.
  • How the change can be accomplished gradually.

The MariaDB site notes three points to consider before you upgrade:

  • Views with definition ALGORITHM=MERGE or ALGORITHM=TEMPTABLE were accidentally swapped between MariaDB and MySQL. You will have to re-create views created with either of these definitions.
  • MariaDB has LGPL versions of the C connector and Java Client. If you are shipping an application that supports MariaDB or MySQL, you should consider using these.
  • Consider trying out the TokuDB storage engine or some of the other new storage engines that MariaDB provides.
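The first point above amounts to dropping and rebuilding each affected view; a sketch with a hypothetical view name:

```sql
-- Re-create any view defined with ALGORITHM=MERGE or ALGORITHM=TEMPTABLE
-- after migrating, since the two definitions were swapped between servers.
DROP VIEW IF EXISTS active_customers;
CREATE ALGORITHM=MERGE VIEW active_customers AS
    SELECT id, name FROM customers WHERE active = 1;
```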

You’ll find there are many considerations before installing Percona, based on the source of the code and whether you are installing over a recent version of MySQL or MariaDB. Note that Percona offers repositories for yum (RPM packages for Red Hat, CentOS and Amazon Linux AMI) as well as for apt (.deb packages for Ubuntu and Debian).
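On the yum side, the repository definition is a small file under /etc/yum.repos.d/; the sketch below is illustrative, and the exact repository name, baseurl and GPG settings should be taken from Percona’s installation documentation:

```ini
# /etc/yum.repos.d/percona.repo -- illustrative sketch; check Percona's
# docs for the current repository definition.
[percona-release]
name = Percona Server
baseurl = http://repo.percona.com/centos/$releasever/os/$basearch/
gpgcheck = 1
enabled = 1
```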

Final Thoughts

Remember that MariaDB was developed to add more features and to optimize queries; it offers scalability features including multi-source replication, which allows a single server to replicate from several sources. Be careful when planning complex replication schemas that bridge MySQL with other implementations: you can replicate from MySQL v5.6 to MariaDB v10, but not the other way around.
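That one-way path means a MariaDB 10 server can act as a replica of a MySQL 5.6 master. A hedged sketch of pointing the replica at the master, where the hostname, credentials and binlog coordinates are all placeholders:

```sql
-- Run on the MariaDB 10 replica.
CHANGE MASTER TO
    MASTER_HOST     = 'mysql56-master.example.com',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'replace-me',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;
SHOW SLAVE STATUS\G   -- check Slave_IO_Running / Slave_SQL_Running
```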

Percona has devoted more of its resources to improving the database and how it manages network interactions. Users come to Percona looking for enhanced availability in addition to high throughput and scale. Because Percona stays closer to current MySQL releases, it’s a more conservative option in terms of application compatibility; IT professionals looking for detailed diagnostics and performance metrics tend to prefer it.

For the TL;DR summary: MariaDB is better for advanced features, the latest security fixes and adapting to new tech. Percona excels at database performance and diagnostics. Either one will put you in good company with cutting-edge technology firms around the world.

6 Basic Security Concerns for SQL Databases – Part 2

In our first article, we looked at some of the basics of securing databases, examined several examples of major data breaches, and reviewed best practices companies use to prevent hackers from accessing sensitive information. In this article, we are going to look at some advanced database security issues and recommend steps you can take to address them.

With the explosion of big data and cloud computing in recent years, database security is more critical than ever. Sites like Facebook use massive, unstructured databases managing millions of data points to handle an enormous user base of more than one billion people. Clearly, these types of computing environments present security challenges.

Advanced Database Security Considerations

According to Imperva, a California-based cyber-security company, there are several major security threats to organizational databases. They include:

  • Excessive privileges or unused privileges

  • Abuse of privileges

  • Input injection (formerly SQL injection)

  • Malware

  • Poor audit trails

  • Exposure of critical storage media

  • Vulnerable and misconfigured databases

  • Insecure sensitive data

  • Distributed denial of service attacks (DDoS)

  • Low levels of security education and knowledge of proper procedures
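Of the threats above, input injection is the one most directly countered at the query level: bind user input as parameters instead of splicing it into SQL text. A minimal server-side sketch for MySQL/MariaDB, with hypothetical table and column names:

```sql
-- The user-supplied value is bound as data, never parsed as SQL.
PREPARE find_user FROM 'SELECT id, name FROM users WHERE email = ?';
SET @email = 'alice@example.com';   -- attacker-controlled input goes here
EXECUTE find_user USING @email;
DEALLOCATE PREPARE find_user;
```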

Tackling Big Data Database Security

These challenges are magnified in advanced databases. Although similarities exist between traditional data security and big data security, the differences include:

  • The amount of data collected and analyzed in big data applications. The sheer variety and volume of big data increase exponentially the challenge of maintaining security. Data repositories are sprinkled across the enterprise, and every source has its own permission levels and security detail. Research, governance, compliance and other data may be in different data sets. The data transfer rates and workflows might be different for each data source. Each of these variables presents another potential attack point for hackers.

  • The technology used for both unstructured and structured big data. One of the major challenges of securing modern databases is that database tools such as Hadoop never had much security baked into them in the first place. By their very nature, they create vulnerabilities that are less prevalent in traditional databases.

  • How big data is stored. Picture a single database server environment in comparison to the distributed environment found in big data applications. By design, these databases can spread out across a number of data environments and server clusters in multiple locations. The distributed infrastructure increases the potential for attacks.

Recommended Security Controls

To meet these challenges, the SANS Institute, an organization focused on security research and education, has developed a list of recommended security controls that increase cyber-defense for advanced database configurations. They include:

  • Account monitoring. Eliminate any inactive accounts, require users to implement strong passwords, and establish maximums for failed login attempts. Close control of database access brings down the chance of a hacker doing damage from the inside.

  • Application security. Implement secure editions of open source software such as Apache Accumulo.

  • Inventory of devices. Monitor every hardware device on your network so that any unauthorized device can be quickly located and blocked from gaining access.

  • Inventory of software. Similar to device inventory, every application that accesses the network must be authorized. Block installation or execution of unauthorized and unapproved software.

  • Procedures and tools. Rather than building security guidelines from scratch each time you add an application or new piece of software, develop checklists, benchmarks, and guidelines that apply to every application. Two things that can help you get started are the Center for Internet Security Benchmarks Program and the NIST National Checklist Program.

  • Vulnerability assessment. On an ongoing basis, assess and evaluate new information and knowledge to identify potential vulnerabilities in your database, and implement procedures to minimize damage. Remember that hackers are on constant attack and are always trying to take advantage of new knowledge in the marketplace.

  • Protect browsers and email. Browsers and email software are popular access vectors for hackers to try to reach your system. Maintaining solid email and browser security minimizes the attacks on your database through these channels.
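For the account-monitoring control, a first pass can be a simple audit of the grant tables. These queries are a sketch against a MySQL 5.6 / MariaDB 10.0-era server; column names vary across versions:

```sql
-- Accounts reachable from any host: candidates for tighter host restrictions.
SELECT User, Host FROM mysql.user WHERE Host = '%';

-- Accounts with no password set: disable or fix immediately.
SELECT User, Host FROM mysql.user WHERE Password = '';
```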

High-Profile Data Breaches

As powerful as modern technology is, sometimes it is hard to believe that computers that process millions of points of data on a daily basis can be crippled so easily. Yet, data breaches on a massive scale are regular items in the news. Here are some examples:

  • Ashley Madison. A group of actors dubbed “The Impact Team” announced they would release Ashley Madison customer information if it did not shut down operations. A site that facilitated extramarital affairs, Ashley Madison’s owners apparently did not believe they were vulnerable and took no action against the threat. However, The Impact Team made good on its promise, and in July 2015, released more than 37 million customer records, including names and passwords. The result was devastating and ongoing, as many customers continue to deal with the fallout of their names being released.

  • Internal Revenue Service. Hackers compromised the computer systems of the Internal Revenue Service and pulled tax records for more than 300,000 taxpayers. Using stolen credentials, they garnered millions of dollars in bogus refunds. They were discovered only when the IRS noticed an inordinate number of requests for tax returns.

  • CareFirst/BlueCross BlueShield. Health records are some of the most personal pieces of data stored in corporate databases. Yet, the health industry continues to experience significant data breaches. In May 2015, CareFirst determined that hackers had gained access to more than one million members’ names, email addresses and birth dates. One good note: the thieves did not get to their employment information, Social Security numbers or financial data, because the associated passwords were encrypted.

  • Kaspersky Lab. Is it possible for a security vendor to experience a significant cyber attack? The answer is yes because, in June 2015, Moscow-based security company Kaspersky Lab was infiltrated by hackers. They were able to compromise data on the company’s products that deal with fraud prevention and secure networks.

  • Harvard University. A July 2015 compromise of the security systems at Harvard University was the latest in a string of breaches at institutes of higher learning across the nation. Although experts are not sure what data the hackers obtained, the infiltration resembled other attacks on institutions, such as a Penn State University breach in the spring that affected the records of more than 18,000 people.

Working With Cloud Providers to Ensure Security

If you are working with cloud vendors, is your data safe? Database security concerns for enterprise computing in the cloud are much the same as for non-cloud databases; data breaches have been happening at an alarming pace in both environments. However, placing data in the cloud means it is not on the same site as your organization, which adds another dimension of risk.

One of the sales points cloud providers extol is that they have specialists who are experts in their fields and so have advanced knowledge that your organization may not possess. While that may have validity, it may not be true in all cases.

In addition, using many people means more of the human element, always a greater risk to data security than any other factor. Even though cloud computing presents an idyllic world of data being secured somewhere “up there,” in truth, it is located in a data center much like the one at your site.

You should be asking cloud providers questions such as:

  • Where is our data stored?

  • Who manages it?

  • Is it always stored in the same place, or is it moved around to different countries?

  • Do any outside personnel have access to my information?

  • Do you encrypt my data, and if so, how do you do it?

  • Other than your firm, what other organizations have access to the encryption key?

Database security for both traditional databases and the high-speed, high-volume distributed databases of big data and cloud computing is similar. However, the significant size, speed and complexity of databases managing huge amounts of information mean a bigger attack surface, more points of vulnerability and increased physical-environment concerns.

Effective Security Implementation

The best practices, strategies and tactics for effective security implementation remain the same for both environments: Keep track of hardware devices on the system, closely monitor all applications on the network, come up with solid guidelines and benchmarks that you apply to every program, consistently evaluate potential vulnerabilities in your system and come up with a plan of remediation, and constantly encourage end users and company personnel to maintain good security habits.

In Summary

This wraps up the second article in our two-part database security series. In the first article, we looked at basic database security procedures that can be implemented by database administrators, especially those who may be new to the position. We recommended straightforward procedures like strengthening network security, limiting access to the server, cutting out unneeded applications, applying patches immediately, encrypting sensitive data and documenting baseline configurations.

In this article, we looked at the bigger picture of advanced database security by examining today’s world of cloud computing, big data, and unstructured databases. We discovered that, while the scope and size of these environments differ greatly from a localized, traditional database, the security concerns are the same. Implement these ideas, and you will have taken major steps toward preventing a critical data breach at your organization.


6 Basic Security Concerns for SQL Databases – Part 1

Consider these scenarios: A low-level IT systems engineer spills soda, which takes down a bank of servers; a warehouse fire burns all of the patient records of a well-regarded medical firm; a government division’s entire website vanishes without a trace.

Data breaches and failures are not isolated incidents. According to the 2014 Verizon Data Breach Investigations Report, databases are one of the most critical vulnerability points in corporate data assets. Databases are targeted because their information is so valuable, and many organizations are not taking the proper steps to ensure data protection.

  • Only 5 percent of the billions of dollars allocated to security products is used for security in data centers, according to a report from International Data Corporation (IDC).

  • In a July 2011 survey of employees at organizations with multiple computers connected to the Internet, almost half said they had lost or deleted data by accident.

  • According to Fortune magazine, corporate CEOs are not making data security a priority, seemingly deciding that they will handle a data problem if it actually happens.

You might think CEOs would be more concerned, even if it is just for their own survival. A 2013 data breach at Target was widely considered to be an important contributing factor to the ouster of Greg Steinhafel, then company president, CEO and chairman of the board. The Target breach affected more than 40 million debit and credit card accounts at the retailing giant. Stolen data included names of customers, their associated card numbers, security codes and expiration dates.

Although the threats to corporate database security have never been more sophisticated and organized, taking necessary steps and implementing accepted best practices will decrease the chances of a data breach, or other database security crisis, taking place at your organization.

6 Basic Security Concerns

If you are new to database administration, you may not be familiar with the basic steps you can take to improve database security. Here are the first moves you should make:

  1. The physical environment. One of the most-often overlooked steps in increasing database security is locking down the physical environment. While most security threats are, in fact, at the network level, the physical environment presents opportunities for bad actors to compromise physical devices. Unhappy employees can abscond with company records, health information or credit data. To protect the physical environment, start by implementing and maintaining strict security measures that are detailed and updated on a regular basis. Severely limit access to physical devices to only a short list of employees who must have access as part of their job. Strive to educate employees and systems technicians about maintaining good security habits while operating company laptops, hard drives, and desktop computers. Lackadaisical security habits by employees can make them an easy target.

  2. Network security. Database administrators should assess weak points in their network and how company databases connect to it. Up-to-date antivirus software running on the network is a fundamental item. Also, ensure that secure firewalls are implemented on every server. Consider changing TCP/IP ports from the defaults, as the standard ports are well-known access points for hackers and Trojan horses.

  3. Server environment. Information in a database can appear in other areas, such as log files, depending on the nature of the operating system and database application. Because the data can appear in different areas of the server environment, you should check that every folder and file on the system is protected. Limit access as much as possible, allowing only the people who absolutely need that information. This applies to the physical machine as well. Do not give users elevated access when they only need lower-level permissions.

  4. Avoid over-deployment of features. Modern databases and related software have some services designed to make the database faster, more efficient and secure. At the same time, software application companies are in a very competitive field, essentially a mini arms race to provide better functionality every year. The result is that you may have deployed more services and features than you will realistically use. Review each feature that you have in place, and turn off any service that is not really needed. Doing so cuts down the number of areas or “fronts” where hackers can attack your database.

  5. Patch the system. Just like a personal computer operating system, databases must be updated on a continuing basis. Vendors constantly release patches, service packs and security updates, but these are only good if you implement them right away. Here is a cautionary tale: In 2003, a computer worm called SQL Slammer penetrated tens of thousands of computer servers within minutes of its release. The worm exploited a buffer-overflow vulnerability in Microsoft’s SQL Server and Desktop Engine. A patch fixing the weakness had been released the previous summer, but many of the companies that became infected had never patched their servers.

  6. Encrypt sensitive data. Although back-end databases might seem to be more secure than components that interface with end users, the data must still be accessed through the network, which increases its risk. Encryption cannot stop malicious hackers from attempting to access data. However, it does provide another layer of security for sensitive information such as credit card numbers.
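As an illustrative sketch of that last point, MySQL and MariaDB ship AES_ENCRYPT()/AES_DECRYPT() built-ins. The table, column and key below are placeholders; in practice the key must come from the application or a key-management service, never be stored in the database:

```sql
SET @app_key = 'replace-with-key-from-app-or-kms';  -- placeholder key

INSERT INTO payment_cards (customer_id, card_number_enc)
VALUES (42, AES_ENCRYPT('4111111111111111', @app_key));

SELECT customer_id,
       CAST(AES_DECRYPT(card_number_enc, @app_key) AS CHAR) AS card_number
  FROM payment_cards;
```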

Famous Data Breaches

Is all this overblown? Maybe stories of catastrophic database breaches are ghost stories, conjured up by senior IT managers to force implementation of inconvenient security procedures. Sadly, data breaches happen on a regular basis to small and large organizations alike. Here are some examples:

  • TJX Companies. In December 2006, TJX Companies, Inc., failed to protect its IT systems with a proper firewall. A group led by high-profile hacker Albert Gonzalez gained access to more than 90 million credit cards. He was convicted of the crime and invited to spend over 40 years in prison. Eleven other people were arrested in relation to the breach.

  • Department of Veterans Affairs. A database containing names, dates of birth, types of disability and Social Security numbers of more than 26 million veterans was stolen from an unencrypted database at the Department of Veterans Affairs. Leaders in the organization estimated that it would cost between $100 million and $500 million to cover damages resulting from the theft. This is an excellent example of human error being the softest point in the security profile. An external hard drive and laptop were stolen from the home of an analyst who worked at the department. Although the theft was reported to local police promptly, the head of the department was not notified until two weeks later. He informed federal authorities right away, but the department did not make any public statement until several days had gone by. Incredibly, an unidentified person returned the stolen data in late June 2006.

  • Sony PlayStation Network. In April 2011, more than 75 million PlayStation Network accounts were compromised. The popular service was down for weeks, and industry experts estimate the company lost millions of dollars. It is still considered by many the worst breach of a multiplayer gaming network in history, and to this day the company says it has not determined who the attackers were. The hackers were able to get gamers’ names, email addresses, passwords, buying histories, addresses and credit card numbers. The fact that Sony is a technology company made the breach all the more surprising and concerning, and consumers began to wonder: if it could happen to Sony, was their data safe at other big companies?

  • Gawker Media. Hackers breached Gawker Media, parent company of the popular gossip site Gawker.com, in December 2010. The passwords and email addresses of more than one million users of Gawker Media properties like Gawker, Gizmodo, and Lifehacker, were compromised. The company made basic security mistakes, including storing passwords in a format hackers could easily crack.

Take These Steps

In summary, basic database security is not especially difficult but requires constant vigilance and consistent effort. Here is a snapshot review:

  • Secure the physical environment.

  • Strengthen network security.

  • Limit access to the server.

  • Cut back or eliminate unneeded features.

  • Apply patches and updates immediately.

  • Encrypt sensitive data such as credit cards, bank statements, and passwords.

  • Document baseline configurations, and ensure all database administrators follow the policies.

  • Encrypt all communications between the database and applications, especially Web-based programs.

  • Match internal patch cycles to vendor release patterns.

  • Make consistent backups of critical data, and protect the backup files with database encryption.

  • Create an action plan to implement if data is lost or stolen. In the current computing environment, it is better to think in terms of when this could happen, not if it will happen.

Basic database security seems logical and obvious. However, the repeated occurrences of major and minor data breaches in organizations of all sizes indicate that company leadership, IT personnel, and database administrators are not doing all they can to implement consistent database security principles.

The cost to do otherwise is too great. Increasingly, corporate America is turning to cloud-based enterprise software. Many of today’s popular applications like Facebook, Google and Amazon rely on advanced databases and high-level computer languages to handle millions of customers accessing their information at the same time. In our next article, we take a closer look at advanced database security methods that these companies and other forward-thinking organizations use to protect their data and prevent hackers, crackers, and thieves from making off with millions of dollars worth of information. 


Database Monitoring for MariaDB and Percona Server

Both MariaDB and Percona Server are forks of MySQL and strive to be drop-in replacements for it from a binary, API-compatibility and command-line perspective.

It’s great to have an alternative to MySQL, since you never know what might happen to it now that it has changed hands (Sun paid roughly $1 billion for MySQL, and Oracle later acquired Sun). In this blog post I set out to see if these MySQL forks would work 100% with AppDynamics for Databases. If you’re not familiar with the AppDynamics for Databases product, I suggest you take a few minutes to read this other blog post.

The Setup

Getting both MariaDB and Percona Server installed onto test instances was pretty simple. I chose two Red Hat Enterprise Linux (RHEL) servers running on Amazon Web Services (AWS), for no particular reason other than they were quick and easy to get running. My first step was to make sure MySQL was gone from my RHEL servers by running “yum remove mysql-server”.

Installing both MariaDB and Percona Server consisted of setting up yum repository files (documented here and here) and running the yum installation commands. This took care of getting the binaries installed so the rest of the process was related to starting and configuring the individual database servers.

The startup command for both MariaDB and Percona Server is “/etc/init.d/mysql start”, so you can see that these products really do strive for drop-in adherence to MySQL. As you can see in the screen grabs below, I ended up running MariaDB 10.0.3 and Percona Server 5.5.31-30.3.

[Screenshots: version banners showing MariaDB 10.0.3 and Percona Server 5.5.31-30.3]

Connected to each of these databases were one instance of WordPress and one instance of Drupal in a nearly “out of the box” configuration, aside from a couple of new posts added to each CMS to help drive a small amount of load. I didn’t want to set up a load-testing tool, so I induced a high disk I/O load on each server by running the UNIX command “cat /dev/zero > /tmp/zerofile”. This command pumps zero bytes into that file as fast as it can, basically crushing the disk. (Use Ctrl-C to kill this command before you fill up your disk.)
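If you’d rather not babysit that command with Ctrl-C, a bounded variant using dd writes a fixed amount and then stops on its own (the path and size here are arbitrary):

```shell
# Write 50 MB of zeros to /tmp/zerofile, then stop -- unlike
# `cat /dev/zero`, this cannot fill the disk.
dd if=/dev/zero of=/tmp/zerofile bs=1M count=50 conv=fsync 2>/dev/null
ls -l /tmp/zerofile
```

Run it in a loop if a single pass doesn’t generate enough sustained I/O.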

The Monitoring

Getting the monitoring set up was really easy. I used a test instance of AppDynamics for Databases to remotely monitor each database instance (yep, no agent install required). To initiate monitoring I opened up my AppDynamics for Databases console, navigated to the agent manager, clicked the “add agent” button, and filled in the fields as shown below (I selected MySQL as the database type):

[Screen grab: the “add agent” configuration form]

My remote agent didn’t connect the first time I tried this because I forgot to configure iptables to let my connection through, even though I had set up my AWS firewall rules properly (facepalm). After getting iptables out of the way (I just turned it off since these were test instances), my database monitoring connections came to life and I was off and running.
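On a real server you’d want something narrower than disabling iptables entirely. A minimal sketch of just opening the database port, assuming the default port 3306 and the classic iptables service on RHEL:

```shell
# Allow inbound TCP connections to the MySQL/MariaDB/Percona port
iptables -I INPUT -p tcp --dport 3306 -j ACCEPT

# Persist the rule across restarts (RHEL's iptables service)
service iptables save
```

You could tighten this further by restricting the rule to the monitoring server’s source IP.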

The Result

Looking at all of the data pouring into AppDynamics for Databases, I can see that it is 100% compatible with MariaDB and Percona Server. No errors are thrown, and the data is everything it should be.

The beauty of my induced disk I/O load was that just by clicking around the web interfaces of WordPress and Drupal I was getting slow response times. That always makes the data more interesting to look at. So here are some screen grabs for each database type for you to check out…

MariaDB Activity

AppDynamics for Databases activity screen for the MariaDB database.

Percona Activity

AppDynamics for Databases activity screen for the Percona database.

MariaDB Explain Statement

Explain statement for a select statement in MariaDB.

Percona Explain Statement

Explain statement for a select statement in Percona.

MariaDB Statistics

A couple of statistics charts for MariaDB.

Percona Statistics

A couple of statistics charts for Percona.

If you’re currently running MySQL you might want to check out MariaDB and Percona Server. You might even see some performance improvements, since both MariaDB and Percona Server use the XtraDB storage engine in place of MySQL’s InnoDB. Having choices in technology is a great thing. Having a unified monitoring platform for your MySQL, MariaDB, Percona Server, Oracle, SQL Server, Sybase, IBM DB2, and PostgreSQL databases is even better. Click here to get started with your free trial of AppDynamics for Databases today.

How To Set Up and Monitor Amazon RDS Databases

Relational databases are still an important application component, even in today’s modern application architectures. There is usually at least one relational database lurking somewhere within the overall application flow, and understanding the behavior of these databases is a major factor in rapidly troubleshooting application problems. In 2009, Amazon launched its RDS service, which lets anyone spin up a MySQL, Oracle, or MS-SQL instance whenever the urge strikes.

While this service is amazingly useful there are also some drawbacks:

  1. You cannot log in to the underlying OS of your database instance, which means you can’t use any agent-based monitoring tools to get the visibility you really want.
  2. The provided CloudWatch monitoring metrics are high-level statistics and not helpful in troubleshooting SQL issues.

The good news is that you can monitor all of your Amazon RDS instances using AppDynamics for Databases (AppD4DB) and in this article I will show you how. If you’re unfamiliar with AppD4DB click here for an introduction.

Setting Up A Database Instance In RDS

Creating a new database instance in RDS is really simple.

Step 1, log in to your Amazon AWS account and open the RDS interface.

[Screen grab: the RDS console]

Step 2, initiate the “Launch a DB Instance” workflow.


Step 3, select the type of instance you want to launch. In this case we will use MySQL but I did test Oracle and MS-SQL too.


Step 4, fill in the appropriate instance details. Pay attention to the master user name and password, as we will use those later when we create our monitoring configuration (although we could create a user just for monitoring if we wanted).
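If you do go the dedicated-monitoring-user route, a sketch of what that might look like follows. The user name, password, and exact privilege list here are assumptions based on what monitoring tools typically need (`PROCESS`, `REPLICATION CLIENT`, and read access); check the AppD4DB documentation for its actual requirements.

```shell
# Connect with the RDS master credentials and create a
# least-privilege monitoring user (all names are hypothetical);
# replace <your-rds-endpoint> with your instance's Endpoint value
mysql -h <your-rds-endpoint> -u masteruser -p <<'SQL'
CREATE USER 'monitor'@'%' IDENTIFIED BY 'choose-a-strong-password';
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'monitor'@'%';
SQL
```

That keeps the master credentials out of your monitoring configuration entirely.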


Step 5, finish the RDS workflow. Notice I called the database “wordpress”, as I will use it to host a WordPress instance. Also notice that we chose the “default” DB security group. You will need to adjust the security group settings after your new instance is created to allow access to the database from the internet. For the sake of testing I opened my database up to the entire internet (the rule itself is not shown in this workflow), which allows anyone with the credentials to connect. You should be much more selective if you have a real database instance with production applications connected.




Step 6, wait for your instance to be created and watch for the “available” status. When you click on the database instance row you will see the details populate in the “Description” tab below. We will use the “Endpoint” information to connect AppD4DB to our new instance. (At this point you can also build the database structure and connect your application to your running instance.)
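Before pointing AppD4DB at the instance, it’s worth a quick check that the endpoint is reachable at all (this is also a fast way to spot a security group problem). A sketch using bash’s built-in /dev/tcp; the hostname is a made-up example, so substitute your instance’s “Endpoint” value:

```shell
# Hypothetical RDS endpoint; replace with your instance's value
ENDPOINT=wordpress.example1234.us-east-1.rds.amazonaws.com

# Try to open a TCP connection to the MySQL port within 5 seconds
if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$ENDPOINT/3306"; then
  echo "port 3306 reachable"
else
  echo "port 3306 blocked - check the DB security group"
fi
```

For Oracle or MS-SQL instances, swap in their default ports (1521 and 1433 respectively).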


Monitor Your Database With AppD4DB

Step 1, enable database monitoring from the “Agent Manager” tab in AppD4DB. Notice that we map the RDS “Endpoint” to the AppD4DB “Hostname or IP Address” field, and in this case we are using the RDS “Master Username” and “Master Password” as the “Username” and “Password” in AppD4DB. Also, since Amazon does not allow any access to the underlying OS (via SSH or any other method), we cannot enable OS monitoring.


Step 2, start your new database monitoring and use your application. Here is a screen grab showing a couple of slow SQL queries.

RDS SQL Activity

The Results

So here is what I found for each type of database offered by Amazon RDS.

  • MySQL: Fully functional database monitoring.
  • Oracle: Fully functional database monitoring.
  • MS-SQL: All database monitoring functionality works except for File I/O Statistics, so monitoring is roughly 99% functional; everything else is captured as expected, including the ability to show SQL execution plans.

Amazon RDS makes it fast and easy to stand up MySQL, MS-SQL and Oracle databases. AppDynamics for Databases makes it fast and easy to monitor your RDS databases at the level required to solve your application and database problems. Sounds like a perfect match to me. Sign up for your free trial of AppD4DB and see for yourself today.