Tags: mysql, amazon-web-services, amazon-ec2, mysql-cluster

Data nodes cannot connect to MySQL Cluster


I have a problem with my MySQL Cluster running on AWS EC2 instances (3 data nodes and 1 management node, all t2.micro instances running the latest version of Ubuntu). I followed this tutorial (updated for MySQL Cluster 7.5.5): https://stansantiago.wordpress.com/2012/01/04/installing-mysql-cluster-on-ec2/

My my.cnf file contains the following:

[mysqld]
ndbcluster
datadir=/opt/mysqlcluster/deploy/mysqld_data
basedir=/opt/mysqlcluster/home/mysqlc
port=3306

My config.ini file contains this:

[ndb_mgmd]
hostname=<private DNS of master>
datadir=/opt/mysqlcluster/deploy/ndb_data
nodeid=1

[ndbd default]
noofreplicas=3
datadir=/opt/mysqlcluster/deploy/ndb_data

[ndbd]
hostname=<private DNS of slave 1>
nodeid=3

[ndbd]
hostname=<private DNS of slave 2>
nodeid=4

[ndbd]
hostname=<private DNS of slave 3>
nodeid=5

[mysqld]
nodeid=50

Then I start the management node like this:

ndb_mgmd -f /opt/mysqlcluster/deploy/conf/config.ini --initial --configdir=/opt/mysqlcluster/deploy/conf

Everything seems fine; no errors are displayed.

However, when I try to connect a slave to the cluster with this command:

ndbd -c <private DNS of master>:1186

it fails with this error:

Unable to connect with connect string: nodeid=0,<private DNS of master>:1186

I ran the command ndb_mgm -e show to see what was going on:

Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 3 node(s)
id=3 (not connected, accepting connect from <private DNS of slave 1>)
id=4 (not connected, accepting connect from <private DNS of slave 2>)
id=5 (not connected, accepting connect from <private DNS of slave 3>)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @<private IP address of master>  (mysql-5.7.17 ndb-7.5.5)

[mysqld(API)]   1 node(s)
id=50 (not connected, accepting connect from any host)

It seems that the management node started on localhost. Is this why it fails? The configuration files look correct, so I don't understand why the data nodes can't connect to the cluster.

If somebody has any suggestions, they would be much appreciated. Thank you!


Solution

  • The key to getting MySQL Cluster to work on EC2 is to ensure that all VMs can communicate over the proper ports.

    This is handled by adding a security group that all VMs are part of. I created a security group called NDB Cluster with rules allowing inbound and outbound TCP traffic on the following ports (a CLI sketch of this setup follows below):

      1186  - NDB Management Server port
      3306  - MySQL Server port
      3316  - extra MySQL Server port
      8081  - MySQL Cluster Auto Installer port
      11860 - MySQL Cluster Data Node port
      33060 - MySQLX port

    I also added ServerPort=11860 to the [ndbd default] section of config.ini (shown below). This means that all nodes connect to the data nodes using this fixed port number instead of a dynamically allocated one.

    With all VMs in this security group, things worked like a charm; when the nodes were not able to communicate with each other, things obviously didn't work at all.
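
    As a concrete sketch of that security group setup, the group and its rules could be created with the AWS CLI roughly as follows. The group name, the VPC id, and the security group id here are placeholders chosen for illustration, not values from the original answer:

    # Create the security group (hypothetical VPC id).
    aws ec2 create-security-group \
        --group-name "NDB-Cluster" \
        --description "MySQL Cluster internal traffic" \
        --vpc-id vpc-0123456789abcdef0

    # Open each cluster port to members of the group itself, so every
    # VM in the group can reach every other VM on these ports.
    SG_ID=sg-0123456789abcdef0   # hypothetical id returned by the call above
    for PORT in 1186 3306 3316 8081 11860 33060; do
        aws ec2 authorize-security-group-ingress \
            --group-id "$SG_ID" \
            --protocol tcp \
            --port "$PORT" \
            --source-group "$SG_ID"
    done

    Using the group's own id as the source of each ingress rule is what lets every instance in the group talk to every other instance without opening the ports to the internet.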
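
    And for reference, the amended [ndbd default] section of config.ini would then read as follows (the first two lines are unchanged from the question; only ServerPort is new):

    [ndbd default]
    noofreplicas=3
    datadir=/opt/mysqlcluster/deploy/ndb_data
    # Fixed port for connections to the data nodes; without it, ndbd
    # listens on a dynamically allocated port, which the security
    # group rules above would not cover.
    ServerPort=11860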