I adapted the code from https://github.com/voxpupuli/puppet-proxysql/pull/140 and modified it to resolve a dependency issue on fresh installations.
modulesync 5.1.0
ProxySQL 2.0.1+ includes a number of schema changes to support GTID tracking for causal consistency reads. The `proxy_mysql_server` and `proxy_mysql_server_no_hostgroup` types are currently not aware of the new `gtid_port` column in the `mysql_servers` table.
This PR adds 1) a function on the Proxysql parent provider class that indicates whether the installed ProxySQL version has GTID tracking support (`has_gtid_tracking?`), based on the `proxysql_version` fact, and 2) support for `gtid_port` on `Proxysql::Server` types.
The changes are also backwards compatible: older and unknown versions ignore the `gtid_port` parameter.
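As a rough illustration, the version gate could look like the following minimal Ruby sketch; treating it as a standalone function that takes the fact value is an assumption made here for readability, not the module's actual provider code.

```ruby
require 'rubygems' # Gem::Version

# GTID tracking support exists from ProxySQL 2.0.1 onwards; nil (unknown
# version) is treated as unsupported, so gtid_port is simply ignored.
def has_gtid_tracking?(proxysql_version)
  return false if proxysql_version.nil?
  Gem::Version.new(proxysql_version) >= Gem::Version.new('2.0.1')
end

puts has_gtid_tracking?('2.0.4')  # => true
puts has_gtid_tracking?('1.4.16') # => false
puts has_gtid_tracking?(nil)      # => false
```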
Let's suppose you have a ProxySQL cluster with 2 nodes configured like this:
```puppet
class { 'proxysql':
  mysql_servers => [
    {
      'db1' => {
        'port'         => 3306,
        'hostgroup_id' => 1,
      }
    },
    {
      'db2' => {
        'hostgroup_id' => 2,
      }
    },
  ],
  cluster_name  => 'test',
  mysql_users   => [
    {
      'app' => {
        'password'          => '*92C74DFBDA5D60ABD41EFD7EB0DAE389F4646ABB',
        'default_hostgroup' => 1,
      }
    },
    {
      'ro' => {
        'password'          => mysql_password('MyReadOnlyUserPassword'),
        'default_hostgroup' => 2,
      }
    },
  ],
}
```
On the first node you will get:
```
Admin> select * from mysql_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| app      | *D45AE3CFCDF725E7B8E1AD008208F2B890DE8CA9 | 1      | 0       | 1                 |                | 0             | 0                      | 0            | 1       | 1        | 10000           |
| ro       | *26EBF0470CAD1F87FBF1DD6B3F20F97D7EEC3C42 | 1      | 0       | 2                 |                | 0             | 1                      | 0            | 1       | 1        | 10000           |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
2 rows in set (0.00 sec)
```
But on the second node the users get duplicated: you will have separate users for frontend and backend (see https://github.com/sysown/proxysql/issues/1580):
```
Admin> select * from mysql_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| app      | *D45AE3CFCDF725E7B8E1AD008208F2B890DE8CA9 | 1      | 0       | 1                 |                | 0             | 0                      | 0            | 0       | 1        | 10000           |
| ro       | *26EBF0470CAD1F87FBF1DD6B3F20F97D7EEC3C42 | 1      | 0       | 2                 |                | 0             | 1                      | 0            | 0       | 1        | 10000           |
| app      | *D45AE3CFCDF725E7B8E1AD008208F2B890DE8CA9 | 1      | 0       | 1                 |                | 0             | 0                      | 0            | 1       | 0        | 10000           |
| ro       | *26EBF0470CAD1F87FBF1DD6B3F20F97D7EEC3C42 | 1      | 0       | 2                 |                | 0             | 1                      | 0            | 1       | 0        | 10000           |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
4 rows in set (0.00 sec)
```
If you now change the user configuration on the second node (where the users are duplicated), for example:
```puppet
mysql_users => [
  {
    'app' => {
      'password'               => '*92C74DFBDA5D60ABD41EFD7EB0DAE389F4646ABB',
      'default_hostgroup'      => 1,
      'transaction_persistent' => 0,
    }
  },
```
you will get the following error:
```
Error: /Stage[main]/Proxysql::Configure/Proxy_mysql_user[app]: Could not evaluate: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -e UPDATE mysql_users SET `frontend` = '1' WHERE username = 'app'' returned 1: ERROR 1045 (#2800) at line 1: UNIQUE constraint failed: mysql_users.username, mysql_users.frontend
Error: /Stage[main]/Proxysql::Configure/Proxy_mysql_user[ro]: Could not evaluate: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -e UPDATE mysql_users SET `transaction_persistent` = '0', `frontend` = '1' WHERE username = 'ro'' returned 1: ERROR 1045 (#2800) at line 1: UNIQUE constraint failed: mysql_users.username, mysql_users.frontend
```
This is because in ProxySQL the primary key of `mysql_users` is not `username` alone, but `username`+`backend` (plus a unique constraint on `username`+`frontend`, which is the one reported in the errors above). And since the `proxy_mysql_user` provider defines default values for these columns:

```ruby
backend  = @resource.value(:backend) || 1
frontend = @resource.value(:frontend) || 1
```

Puppet tries to update every user back to those default values and runs into the constraint violation.
This pull request fixes the issue. It removes the default values for the `frontend` and `backend` columns (they already have defaults defined in the ProxySQL database), so these values won't be updated on every Puppet run unless they are explicitly configured.
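To make the idea concrete, here is a minimal standalone Ruby sketch of the approach; `update_sql` is a hypothetical helper invented for this description, not the module's actual provider code.

```ruby
# Build the SET clause only from attributes that were explicitly set, so
# columns left unset (here frontend/backend) keep whatever defaults
# ProxySQL assigned at INSERT time and never appear in the UPDATE.
def update_sql(username, attrs)
  set = attrs.reject { |_k, v| v.nil? }
             .map { |k, v| "`#{k}` = '#{v}'" }
             .join(', ')
  return nil if set.empty? # nothing explicitly configured, skip the UPDATE
  "UPDATE mysql_users SET #{set} WHERE username = '#{username}'"
end

# With frontend and backend unset, neither column is touched, so the
# UNIQUE constraints on the duplicated rows can no longer be violated:
puts update_sql('app', 'transaction_persistent' => 0,
                       'frontend' => nil, 'backend' => nil)
# => UPDATE mysql_users SET `transaction_persistent` = '0' WHERE username = 'app'
```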
modulesync 5.4.0
Hello,
Right now in some resources we have:

```ruby
validate do
  raise('name parameter is required.') if (self[:ensure] == :present) && self[:name].nil?
  raise('hostname parameter is required.') if (self[:ensure] == :present) && self[:hostname].nil?
  raise('port parameter is required.') if (self[:ensure] == :present) && self[:port].nil?
end
```
so the error is raised only when the resource is supposed to be present, but for some resource types this is not the case. So let's make this behaviour consistent across all resource types and support purging for all of them.
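For illustration, a consistent version of that guard might look like the Ruby sketch below; the parameter list comes from the snippet above, and folding the checks into a loop is just one possible shape for the patch, not the actual diff.

```ruby
# Check required parameters only when the resource should be present, so
# that instances managed with ensure => absent (e.g. discovered during
# purging) pass validation instead of raising.
validate do
  if self[:ensure] == :present
    [:name, :hostname, :port].each do |param|
      raise("#{param} parameter is required.") if self[param].nil?
    end
  end
end
```

With every type validating this way, purging via the `resources` metatype (e.g. `resources { 'proxy_mysql_user': purge => true }`) no longer trips over required-parameter checks for absent instances.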