How To Properly Specify Command and Arguments to Exec in Perl

Posted on Wednesday, December 11th, 2019 at 2:39:49pm

When using Perl’s exec() call, the best way to invoke it is as follows:
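A minimal sketch (program and argument values are illustrative):

my $program = '/bin/echo';
my @args    = ( 'hello', 'world' );

exec( $program, @args ) or die "Unable to exec $program: $!";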

This syntax does some important things:
– passes a scalar value as the first argument, which exec interprets as PROGRAM
– passes an array as the second argument, which exec uses as the arguments to pass to PROGRAM

Note the use of the parentheses, which I have found to be required for proper behavior.

For more information, please see: https://perldoc.perl.org/functions/exec.html

How To Change Your Commit Message in Git

Posted on Monday, December 2nd, 2019 at 8:15:31am

Not pushed + most recent commit:
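For example, to replace the message of that commit (message text is illustrative):

git commit --amend -m "Corrected commit message"

Running git commit --amend without -m opens your configured editor so you can edit the message there instead.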

How To Force vim to Convert Tabs to Spaces

Posted on Tuesday, November 26th, 2019 at 12:57:23pm

To create indents with 4 space characters which are entered by pressing the TAB key:
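One common combination is:

:set tabstop=4
:set softtabstop=4
:set shiftwidth=4
:set expandtab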

To make the above settings permanent, add them to your ~/.vimrc file.
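For example (the leading colon is not needed inside the file):

set tabstop=4
set softtabstop=4
set shiftwidth=4
set expandtab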

For more details on any of these see :help ‘optionname’ in vim (e.g. :help ‘tabstop’)

tabstop
The width of a hard tabstop measured in “spaces” — effectively the (maximum) width of an actual tab character.
shiftwidth
The size of an “indent”. It’s also measured in spaces, so if your code base indents with tab characters then you want shiftwidth to equal the number of tab characters times tabstop. This is also used by things like the =, > and < commands.
softtabstop
Setting this to a non-zero value other than tabstop will make the tab key (in insert mode) insert a combination of spaces (and possibly tabs) to simulate tab stops at this width.
expandtab
Enabling this will make the tab key (in insert mode) insert spaces instead of tab characters. This also affects the behavior of the retab command.
smarttab
Enabling this will make the tab key (in insert mode) insert spaces or tabs to go to the next indent of the next tabstop when the cursor is at the beginning of a line (i.e. the only preceding characters are whitespace).

How To Remove the MacOS Quarantine Extended File Attribute for an Entire Directory Tree

Posted on Sunday, November 24th, 2019 at 9:00:34am

xattr -r -d com.apple.quarantine {dirname}
~or~
xattr -r -d com.apple.quarantine *

How To Prevent WordPress from Applying wpautop to a Page or Post

Posted on Tuesday, November 19th, 2019 at 1:56:03pm

All credit to Graham Walters for this excellent plugin and post, upon which this post is based – thank you, sir!
https://grahamwalters.me/lab/disable-wpautop-on-specific-postspages/

Recently, a client was having problems with a plugin invoked via a shortcode. As it turned out, the JavaScript embedded in the returned content was being broken by the WordPress auto-paragraph feature known as wpautop.

Thanks to a bit of research, I found the following solution, recorded here in case the original blog post becomes unavailable.

  1. Create the file wp-content/plugins/disable-wpautop/disable_wpautop.php based on the code below the procedure.
  2. WordPress » Admin » Plugins » Activate “Disable wpautop”
  3. Go to the page or post to disable the auto-paragraph feature on
  4. If the Custom Fields box is not visible below the editor, locate the Screen Options button in the upper right corner, open it and check the box labeled Custom Fields.
  5. Locate the Custom Fields box below the editor, find the header “Add New Custom Field:” inside it, and click “Enter New” beneath that header.
  6. Enter wpautop as the Name, and false as the Value, then click the “Add Custom Field” button.
  7. Click the ‘Update’ button to save the settings.

wp-content/plugins/disable-wpautop/disable_wpautop.php:
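A minimal sketch of such a plugin, following Graham Walters’ approach (the function name is illustrative and the code may differ from the original):

<?php
/*
Plugin Name: Disable wpautop
Description: Disables the wpautop auto-paragraph filter on any post or page that has a custom field named "wpautop" set to "false".
*/

function dwpautop_maybe_disable() {
    if ( ! is_singular() ) {
        return;
    }
    // Check the custom field added in step 6 of the procedure above.
    if ( 'false' === get_post_meta( get_queried_object_id(), 'wpautop', true ) ) {
        remove_filter( 'the_content', 'wpautop' );
        remove_filter( 'the_excerpt', 'wpautop' );
    }
}
add_action( 'wp', 'dwpautop_maybe_disable' );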

How To Enable the WordPress Custom Fields Option Under the Screen Options Menu

Posted on Tuesday, November 19th, 2019 at 1:30:24pm

Normally, there is an option in the Screen Options menu to show or hide the Custom Fields tool.

If this option is missing, you may have the plugin Advanced Custom Fields (ACF) version 5.5.13 or greater installed and active because ACF removes the Custom Fields tool to improve page load speed.

If you do have ACF active, add the following to your WordPress theme functions.php file to restore the Custom Fields checkbox:
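A commonly used snippet relies on ACF’s own remove_wp_meta_box setting:

add_filter( 'acf/settings/remove_wp_meta_box', '__return_false' );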

How To Block All Traffic From China Using iptables and ipset on Amazon Linux

Posted on Thursday, October 24th, 2019 at 8:02:41am

All credit to Matt Wilcox for this excellent article, upon which this post is based – thank you, Matt!
https://mattwilcox.net/web-development/unexpected-ddos-blocking-china-with-ipset-and-iptables/

All commands run as root!

Do this once only:
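A sketch of the one-time setup (the package manager, paths and the ipdeny.com source URL are assumptions; adjust for your environment):

yum install -y ipset

# create the set that will hold the Chinese CIDR blocks
ipset create china hash:net

# drop any inbound traffic whose source address matches the set
iptables -A INPUT -m set --match-set china src -j DROP

# create the script that (re)populates the set
cat > /usr/local/bin/blockchina <<'EOF'
#!/bin/bash
# Re-download the CN address blocks and reload them into the "china" ipset.
ipset flush china
wget -q -O /tmp/cn.zone http://www.ipdeny.com/ipblocks/data/countries/cn.zone
while read -r cidr; do
    ipset add china "$cidr"
done < /tmp/cn.zone
EOF
chmod +x /usr/local/bin/blockchina

# populate the set for the first time
/usr/local/bin/blockchina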

Then add blockchina to the root crontab so the list stays current:
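For example, a weekly refresh (the schedule is illustrative):

0 4 * * 0 /usr/local/bin/blockchina >/dev/null 2>&1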

How To Read Multiple Lines of User Input from the Command Line in Perl

Posted on Wednesday, October 9th, 2019 at 11:59:50am

Use Case: Export Apple Reminders from MacOS application to cleaned plain text, i.e. remove unwanted spaces, blank lines and square brackets

PROCEDURE

  1. Click on a single item in the MacOS Reminders app list
  2. Select All (either Command-a or use Edit->Select All)
  3. Copy (either Command-c or use Edit->Copy)
  4. Open Terminal
  5. Run cleanme
  6. Paste the copied reminders into the Terminal window, then press Control-D
  7. The script will output the cleaned list as plaintext

EXAMPLE

Here is the Perl code I used to do it:
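A minimal version of the cleanme script might look like this (the details may differ from the original):

#!/usr/bin/env perl
# cleanme - read pasted Reminders text on STDIN until Control-D,
# strip square brackets, trim whitespace and drop blank lines.
use strict;
use warnings;

while ( my $line = <STDIN> ) {
    chomp $line;
    $line =~ s/[\[\]]//g;        # remove square brackets
    $line =~ s/^\s+|\s+$//g;     # trim leading/trailing whitespace
    next unless length $line;    # skip blank lines
    print "$line\n";
}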

How To Sync Box to S3 Using rclone

Posted on Thursday, October 3rd, 2019 at 10:26:21am

To sync various cloud resources, use the excellent cli tool rclone from https://rclone.org/docs/

For this use case, the need was to sync from Box to an AWS S3 bucket.

Install rclone:
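One way is the install script from rclone.org (requires sudo):

curl https://rclone.org/install.sh | sudo bash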

Configure both S3 and Box – for remote name labels I just used “S3” and “Box”:
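The configuration is interactive; run it once and follow the prompts to create each remote:

rclone config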

Validate Access and Functionality:
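For example, list the top-level folders on each remote to confirm the credentials work:

rclone lsd Box:
rclone lsd S3: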

Perform the Sync:
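A basic sync from Box into an S3 bucket (bucket name and path are illustrative):

rclone sync Box: S3:my-bucket/box-backup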

A more advanced example:
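For instance, adding progress output, more parallel transfers and a log file (all values illustrative):

rclone sync Box: S3:my-bucket/box-backup --progress --transfers 8 --checkers 16 --log-level INFO --log-file /var/log/rclone-box-to-s3.log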

Make It Faster: Improving MySQL Write Performance for Tungsten Cluster Slaves

Posted on Thursday, September 26th, 2019 at 10:59:10am

Overview

The Skinny

In this blog post we explore various options for tuning the MySQL server for better slave replication performance.

A Tungsten Cluster relies upon the Tungsten Replicator to move events from the master node to the slaves. Once the event has been transferred to the slave as THL on disk, the slave applier will then attempt to write it to the database. The Replicator can only apply events as fast as MySQL allows. If the MySQL server is somehow slow or blocking, then the Replicator will be as well.

A properly-tuned database server, combined with infrastructure and SysAdmin best practices, will go quite a long way towards high-performance slave apply.


The Question

Recently, a customer asked us:

During one of our load tests, we had a peak of 60k writes/min, averaging around 40k w/m. During this period, we saw an applied latency of 100-160 seconds. Is it possible to improve the replication latency on the slaves?


The Summary

Where To Look and What For?

When performance tuning, we normally examine the configuration and logs for the following layers:

  • Tungsten Replicator
  • MySQL server
  • Storage
  • Network/Firewall
  • Hardware & OS

The Rules

What are Performance Tuning Best Practices?

Before we dive into the various layers, let’s review the performance tuning basics that I have learned over the years:

  • Performance tuning is a slow, iterative process.
  • Change one thing at a time and test carefully.
  • Document your changes and test results.
  • Go for the low-hanging fruit early on. Sometimes the smallest changes have the largest results, e.g. adding a proper index to a table.
  • As soon as you remove one bottleneck, the next one will appear. Iterate!
  • Divide and Conquer – pick a spot and ensure everything is working properly on both sides. This often helps in isolating issues.
  • Start at the top and work your way down the layers. Test, test, test!
  • Start at the bottom and work your way up the layers. Test, test, test!
  • When in doubt, ask, then document.
  • Read the documentation. Then read it again.
  • Ensure consistency amongst all nodes at every layer.
  • Most of all, do not assume.

Tungsten Replicator

The Short Version

We always start off by checking the replicator applier stages, of which there are three.

On a slave, if the appliedLatency of the remote-to-thl stage is low, but the appliedLatency of both the thl-to-q and q-to-dbms stages are high, then the issue is almost always with the MySQL server somehow.

For example, on a healthy cluster:

On a Master Node:
shell> trepctl status -name tasks | egrep 'applied|stage'

On a Slave Node:
shell> trepctl status -name tasks | egrep 'applied|stage'

For more information about stages, please visit the following blog post: Mastering Tungsten Replicator Series: Understanding Pipelines and Stages

Quick Tech “Tool Tip”
Here is a handy command to describe the summary progress of each stage on that node, refreshing every two seconds:
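One option is to wrap the trepctl pipeline shown above in watch:

shell> watch -n2 "trepctl status -name tasks | egrep 'applied|stage'"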

MySQL Server

What a Lovely Tune
  • The first thing to do would be to enable the slow query logs just to validate that there is nothing blocking faster queries behind it. Since the applier is single-threaded, just one repeated slow query can create a huge bottleneck.
  • Next, SHOW FULL PROCESSLIST on the slaves during the latency would be helpful to identify what may be locking or blocking.
  • Use the MySQL EXPLAIN command to better understand why queries are slow (see the sketch after this list).
  • Make sure there are no redundant indexes.
  • Does every table have a Primary Key?
  • What type of replication is enabled in MySQL – STATEMENT, MIXED or ROW? For active/passive clusters, we recommend MIXED, and for active/active clusters, use ROW to ensure data integrity across masters. ROW will increase latency due to the sheer volume of data transmitted as compared to STATEMENT or MIXED.
  • Are the tables InnoDB? If so, do you have lots of memory? Can you load the tables into RAM and then sync to disk? This could help avoid the serialized physical disk i/o that can create bottlenecks, even when the i/o channel itself is not full.
  • Check for the use of Unique indexes, which disable the InnoDB change buffering performance boost. InnoDB has change buffering (previously called the insert buffer), which is a feature to delay building secondary indexes that are not unique, and merge writes. It can boost insert performance by quite a lot, and it’s enabled by default.
  • Finally, how large are the affected tables and schemas in terms of row size and byte size? Can the tables be split? Can the databases be split?
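As an illustration of the slow query log, PROCESSLIST and EXPLAIN checks above (thresholds, table and column names are hypothetical):

-- enable the slow query log at runtime
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;

-- see what is running (and possibly blocking) during a latency spike
SHOW FULL PROCESSLIST;

-- explain a suspect statement found in the slow log
EXPLAIN SELECT order_id, status FROM orders WHERE customer_id = 42;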

Key MySQL my.cnf Configuration Parameters to Check:
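Typical candidates include the following (values are illustrative only; size them for your own hardware and durability requirements):

innodb_buffer_pool_size        = 12G      # commonly 60-80% of RAM on a dedicated database host
innodb_log_file_size           = 1G       # larger redo logs absorb heavy write bursts
innodb_flush_log_at_trx_commit = 2        # relaxing durability on slaves can speed up apply
sync_binlog                    = 0        # same trade-off as above
innodb_flush_method            = O_DIRECT # avoid double-buffering through the OS page cache
max_allowed_packet             = 64M      # must accommodate the largest replicated events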

For this specific customer, we identified four (4) “hot” tables in the slow query logs, which turned out to have 2 million or more rows each, forcing the database to perform i/o-intensive operations.
Once the issue was clearly localized to the four tables, the scope narrowed somewhat, implying an indexing, query or disk i/o-based bottleneck.


Storage

Disk Inside and Out
  • Disk I/O is normally the slowest part of any compute workflow because disk is often much slower than physical memory. No matter how fast the underlying SSD is, there are physical limitations to overcome, especially with SAN and Cloud-based storage.
  • For example, when using AWS EC2, consider using enhanced EBS I/O with provisioned IOPS if you are not already doing so, which would provide faster performance at an increased cost.
  • Not having separate volumes for different purposes will always degrade performance because of disk contention. For example, ensure separate volumes for (at least) root, mysql data, mysql binlogs and THL files.
  • Then stripe each filesystem used by MySQL over multiple volumes using RAID of some sort (i.e. RAID-0), so that there are multiple I/O channels in use at the same time for the same filesystem.
  • This layered disk architecture provides for a large number of parallel disk I/O channels, giving a much higher throughput at a much lower latency.
  • What is the filesystem type (i.e. xfs, ext4, etc…)? Journaling filesystems are very slow, so consider using a non-journaling filesystem, or disabling journaling. We suggest using xfs.
  • Mount the filesystem with the noatime option (e.g. mount -o noatime, or add noatime to the options field in /etc/fstab).
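For example, an /etc/fstab entry for an xfs data volume (device and mount point are illustrative):

/dev/nvme1n1  /volumes/mysql/data  xfs  defaults,noatime  0 0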

Network/Router/Firewall

The Network Really IS the Computer
  • Ensure sufficient bandwidth, and a very low error rate.
  • Confirm the various hops have sufficient CPU, RAM and bandwidth.
  • Is the firewall able to keep up?

Operating System

SysAdmins to the Rescue, As Usual
  • Is there enough CPU?
  • Is there enough RAM?
  • Check the nproc (max user processes) and nofile (open files) limits
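For example, check the current limits as the user that runs MySQL and the Replicator:

ulimit -u    # max user processes (nproc)
ulimit -n    # max open files (nofile)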

Advanced Tungsten Replicator

The Parallel (Apply) Universe

After making sure that all reasonable efforts have been made to properly evaluate the above solutions, there is an advanced feature available to try: Parallel Apply.

Tungsten Replicator by default uses a single-threaded applier, so it can get about 10,000 updates per second maximum, depending on the round trip time, and so on. Since MySQL server is multi-threaded, the master is able to write faster than the slaves.

To increase that limit, Tungsten Replicator offers Parallel Apply, which employs multiple replication apply threads, one per shard.

https://docs.continuent.com/tungsten-clustering-6.0/deployment-parallel.html
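A sketch of how Parallel Apply is typically enabled through tpm (the service name and channel count are illustrative):

shell> tpm update alpha --channels=10 --svc-parallelization-type=disk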

By default, we shard by database, with one shard created per database schema. If there is a cross-database query, all other threads block until that one completes, slowing performance down dramatically.

This means Parallel Apply is best suited for environments with equally busy writes across every database. Having many databases but only one or two hot ones defeats the design and purpose of Parallel Apply.

Again, any cross-shard query will force the Replicator back into single-threaded mode to ensure data integrity, with the result of no performance gain, or even degradation, since the Replicator now has to keep switching modes.

Tungsten Replicator can also shard by table, with the same caveats, but this time cross-TABLE queries will block other threads, making this somewhat less useful than desired based on most query designs.

http://docs.continuent.com/tungsten-replicator-6.1/filters-reference-shardbytable.html

Important questions before implementing Parallel Apply:

  • Do you have just one database schema or multiple?
  • If single, do the bulk of your queries cross tables or use single tables?
  • If multiple databases, are they de-coupled or do you do a bunch of cross-database queries?

The Answer

What WAS the Solution After All?

For this customer, the following changes improved performance to acceptable levels:

  • The THL was being written to the same location as the MySQL binary logs, creating contention. Moving the THL directory to a different location (/logs) improved performance.
  • Because the nodes are AWS EC2 instances, converting volumes from GP2 to Provisioned IOPS allowed the slave to keep up with replication much better.
  • Several MySQL configuration values were modified, which also decreased the applied latency on the slaves.


The Library

Please read the docs!

For more information about how to use the Tungsten Replicator, please see Mastering Tungsten Replicator Series: Command Line Superpowers

For more in-depth discussion about MySQL database tuning, here are some excellent blog posts by Morgan Tocker:
http://www.tocker.ca/2013/09/17/what-to-tune-in-mysql-56-after-installation.html
http://www.tocker.ca/2013/05/06/when-does-mysql-perform-io.html

For more technical information about Tungsten clusters, please visit https://docs.continuent.com


Summary

The Wrap-Up

Clearly, there are many, many things to think about when it comes to MySQL performance tuning – this blog post barely scratches the surface of the subject. Remember, performance tuning is all about iterative effort over time!

Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit https://www.continuent.com/solutions

Want to learn more or run a POC? Contact us.