Partial table dump using mysqldump

There are times when you only want to take a partial dump of a MySQL table. You can use the mysqldump command with one extra option, --where, and supply your WHERE clause (the partial-dump condition) as its value.

Below is the command to take a partial dump of a table.

mysqldump -u [username] -p [databasename] [mytablename] --where="mycolumn = 'somevalue'" > filename.sql
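If you script such dumps, it helps to build the command as an argument list so the shell can't mangle the quotes inside the WHERE clause. A minimal sketch, with made-up database/table names (`school`, `student`) and a hypothetical helper function:

```python
import subprocess

def build_partial_dump_cmd(user, database, table, where_clause):
    """Build the mysqldump argv for a partial (filtered) table dump."""
    return [
        "mysqldump",
        "-u", user,
        "-p",                       # prompts for the password interactively
        database,
        table,
        "--where=" + where_clause,  # only rows matching this condition are dumped
    ]

# Hypothetical names for illustration:
cmd = build_partial_dump_cmd("root", "school", "student", "name = 'shashank'")
print(" ".join(cmd))

# To actually run it and capture the dump:
# with open("filename.sql", "w") as out:
#     subprocess.run(cmd, stdout=out, check=True)
```

Passing the WHERE clause as a single `--where=...` argument avoids the nested-quoting problems you get when composing the command as one shell string.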


MySQL dump in XML format

To take a dump in XML format, you have to pass an extra option to the default mysqldump command. This can be useful when you want to migrate your MySQL database to some other database.

You can use either of the below commands to take a MySQL database dump in XML format.

mysqldump --xml -u[username] -p[password] [db_name] > filename.xml


mysqldump -X -u[username] -p[password] [db_name] > filename.xml

For example, if you have a database test_database with the username root and the password mypassword, the command to take the dump will be:

mysqldump --xml -u root -pmypassword test_database > filename.xml

Note that there is no space between -p and the password; with a space, mysqldump prompts for a password and treats the next word as a database name.
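The XML dump wraps each row in a `<row>` element whose `<field>` children carry the column name in a `name` attribute, which makes it straightforward to process with any XML parser. A sketch with a hand-written sample that mimics that general shape (a real dump also contains `<table_structure>` elements, namespaces, and more):

```python
import xml.etree.ElementTree as ET

# Hand-written sample mimicking the general shape of `mysqldump --xml` output.
sample = """<mysqldump>
  <database name="test_database">
    <table_data name="student">
      <row><field name="id">1</field><field name="name">shashank</field></row>
      <row><field name="id">2</field><field name="name">john</field></row>
    </table_data>
  </database>
</mysqldump>"""

root = ET.fromstring(sample)
rows = []
for row in root.iter("row"):
    # Each <field> carries the column name in its "name" attribute
    rows.append({f.get("name"): f.text for f in row.findall("field")})
print(rows)
```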

MySQL General query logs

MySQL records each query in the general query log. This can be helpful in application profiling, if you want to see how many and which queries are run in a transaction and when a commit/rollback is executed.

You can enable the MySQL general query log by running the below SQL command as the root user:

SET GLOBAL general_log = 'ON';

By default, logs will be stored at /var/log/query.log. If you get a permission error after running the above command, edit your mysqld.cnf file, which is located under /etc/mysql/mysql.conf.d (this location may be different for your installation):

vi /etc/mysql/mysql.conf.d/mysqld.cnf

and change

general_log_file = /var/log/query.log

to

general_log_file = /var/lib/mysql/your_log_filename.log

In my case the default general query log file location was /var/log/query.log, and MySQL didn't have permission to write at that location.
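When profiling, you usually want to pull just the queries back out of the log. A sketch of parsing one log entry, assuming a simplified line shape of "timestamp, connection id, command, argument"; real general-log lines vary between MySQL versions (timestamp format, file header, multi-line queries), so treat this as a starting point:

```python
import re

# Assumed, simplified general-log line format: <time> <id> <command> <argument>
LINE_RE = re.compile(r"^\s*(?P<time>\S+)\s+(?P<id>\d+)\s+(?P<command>\w+)\s+(?P<arg>.*)$")

def parse_log_line(line):
    """Return the parsed fields of one general-log line, or None if it doesn't match."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

entry = parse_log_line("2013-01-10T12:00:00.000000Z\t 3 Query\tSELECT * FROM student")
print(entry["command"], "->", entry["arg"])
```

Filtering on `command == "Query"` separates actual SQL statements from Connect/Quit bookkeeping entries.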

Credits : Chirag Jain

Increase MySQL maximum connection limit

MySQL’s default configuration sets the maximum number of concurrent connections to 151. If you get a “Too many connections” error when trying to connect to the MySQL server, it means that all available connections are in use by other clients.

The number of permitted connections is controlled by the max_connections system variable. Its default value is 151, though the server actually accepts one extra connection (reserved for a user with SUPER privileges), i.e. 152 in total. If you need to support more connections, set a larger value for this variable.

MySQL 3.x:

# vi /etc/my.cnf
set-variable = max_connections = 170

MySQL 4.x and 5.x:

# vi /etc/my.cnf
max_connections = 170

Restart MySQL once you’ve made the changes.

/etc/init.d/mysqld restart

or

service mysqld restart

You can also change this variable after logging into the MySQL server as the root user.


Find the existing limit:

show variables like 'max_connections';

Set the updated maximum connection limit:

SET GLOBAL max_connections = 170;

Note that a value set with SET GLOBAL is lost on restart; make the change in my.cnf as well to persist it.
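If you manage several servers, you may want to script the my.cnf edit rather than open vi on each box. A simplified sketch (the function name is made up, and it ignores section scoping and the old set-variable syntax):

```python
import re

def set_max_connections(cnf_text, limit):
    """Replace (or append) the max_connections line in my.cnf-style text."""
    pattern = re.compile(r"^\s*max_connections\s*=\s*\d+\s*$", re.MULTILINE)
    replacement = "max_connections = %d" % limit
    if pattern.search(cnf_text):
        return pattern.sub(replacement, cnf_text)
    # Setting absent: append it instead of silently doing nothing
    return cnf_text.rstrip("\n") + "\n" + replacement + "\n"

cnf = "[mysqld]\nmax_connections = 151\nskip-name-resolve\n"
print(set_max_connections(cnf, 170))
```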

Kill a running query in Postgres

Some queries take a long time to execute. Sometimes you need to stop such a query, but you can't simply press Ctrl+C because it is coming from an application. There are two main approaches to killing the query: you can find the operating-system process corresponding to the query and kill it, or you can find the procpid of the query and terminate it from the PostgreSQL console. The former approach is not a great idea, because PostgreSQL treats the killed backend as a crash and restarts the whole database. For the latter approach, follow the steps below.

First, get the status of the running queries. To get details about everything running on PostgreSQL, execute:

SELECT * FROM pg_stat_activity;


or, to see just the process id and the query text:

SELECT procpid, current_query FROM pg_stat_activity;

Get the procpid of the query you want to stop (note: in PostgreSQL 9.2 and later, this column is named pid). Cancel the query with:

SELECT pg_cancel_backend(procpid);

If cancelling is not enough, kill the backend's database connection with:

SELECT pg_terminate_backend(procpid);
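If several backends need to be stopped, you can script the selection. A sketch that takes (procpid, current_query) pairs, as returned by the pg_stat_activity query above, picks the ones matching a pattern, and emits the terminate statements (the sample rows are made up):

```python
rows = [  # pretend result of: SELECT procpid, current_query FROM pg_stat_activity;
    (1234, "SELECT * FROM big_table"),
    (1235, "<IDLE>"),
    (1236, "SELECT * FROM big_table JOIN other USING (id)"),
]

def terminate_statements(rows, needle):
    """Emit a pg_terminate_backend() call for each backend whose query contains needle."""
    return ["SELECT pg_terminate_backend(%d);" % pid
            for pid, query in rows if needle in query]

for stmt in terminate_statements(rows, "big_table"):
    print(stmt)
```

Always review the emitted statements before running them: terminating the wrong backend drops a live client connection.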

MySQL query with ‘NOT LIKE IN’ conditions

While working with a MySQL database, I needed to write a query that excludes rows where a column contains one or more words from a set of words, i.e. fetch data from a student table where the name doesn't contain the words shashank, john, etc.

Initially I tried to write a query like this:

select * from TABLE
where Column NOT LIKE IN ('%VALUE1%', '%VALUE2%', '%VALUE3%')

but I got an error, because IN doesn't work with LIKE.

There are two ways you can write this query.

Method 1 :

Add multiple LIKE conditions, one for each word to be excluded.

Select * from TABLE
where Column NOT Like '%VALUE1%'
AND Column NOT Like '%VALUE2%'
AND Column NOT Like '%VALUE3%' 

Method 2 :

If you have a set of words that you want to include or exclude when searching a particular column, you can use MySQL's regular expression operator, REGEXP.

Exclude a set of words from a column:

SELECT * FROM TABLE
WHERE Column NOT REGEXP 'VALUE1|VALUE2|VALUE3';

Search for a set of words in a column:

SELECT * FROM TABLE
WHERE Column REGEXP 'VALUE1|VALUE2|VALUE3';

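The same include/exclude logic can be replicated in application code with an alternation regex. A sketch with hypothetical data, mirroring the NOT-REGEXP style of filtering:

```python
import re

# Words to exclude, joined into one alternation — the same idea as
# MySQL's `Column NOT REGEXP 'shashank|john'`.
excluded = re.compile(r"shashank|john", re.IGNORECASE)

names = ["Shashank Gupta", "Alice", "John Doe", "Bob"]
kept = [n for n in names if not excluded.search(n)]
print(kept)
```

Note that, like LIKE '%...%', a bare alternation matches substrings; add word boundaries (\b) if you only want whole-word matches.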
Disable an index in Postgres

If you have an index on a table that you would like to temporarily disable, you can use one of the following approaches.

You can poke the system catalogue to disable an index:

update pg_index set indisvalid = false where indexrelid = 'test_pkey'::regclass;

This means that the index won’t be used for queries but will still be updated. It’s one of the flags used for concurrent index building.

If you want to test a single query without the index, you can drop the index inside a transaction and roll it back, which makes recovery trivial:

begin;
drop index mytable_index;
explain analyze select * from mytable;
rollback;

There is no way to disable just one index for a single query, but you can also set enable_indexscan = off to stop the planner from using index scans for the session.


‘ascii’ codec can’t encode characters: ordinal not in range(128)

I work on a system that fetches data from a Postgres database using Python and writes it into a CSV file. A few days ago I hit the error “ascii codec can’t encode characters: ordinal not in range(128)” whenever I tried to fetch data from a particular date range.

Problem:

Whenever I tried to fetch data for a particular date range (say, 1-Jan-2013 to 31-Jan-2013), the error popped up, although fetching data from other date ranges worked fine. That clearly meant the problem was in the data of that date range only. It turned out that a few entries in one of the columns (email) of my table stored non-ASCII values. When Python tried to write those values into the CSV, it couldn't encode them and the error came up.

Solution:

Either you can encode the value to UTF-8 in Python before writing it:

email = [DefectedValue].encode('utf-8').strip()

Or you can find the data actually causing the problem and remove it. To find the defective rows, run the following query in your PostgreSQL terminal:

SELECT * FROM YourTableName WHERE EffectedColumn similar to '%[^\x20-\x7e]+%' ;

You can then update or delete the tuples that store non-ASCII characters.
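The same check the SQL above performs with the character class [^\x20-\x7e] can also be done in Python before writing the CSV, so you can spot the offending values up front. A sketch with made-up sample data:

```python
import re

# Matches any character outside printable ASCII — the same class as the
# SQL pattern '%[^\x20-\x7e]+%' used above.
NON_ASCII = re.compile(r"[^\x20-\x7e]")

emails = ["alice@example.com", "jos\u00e9@example.com"]  # second one contains a non-ASCII 'é'
bad = [e for e in emails if NON_ASCII.search(e)]
print(bad)
```

In Python 3 the cleaner fix is usually to open the CSV file with encoding='utf-8' and keep the non-ASCII data as-is, rather than deleting it.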