GitHub puppet-proxysql
Puppet module to configure ProxySQL

Repo Checks (17 of 26 successful)

- Metadata Valid: passed
- Correct Puppet Version Range (supported Puppet version range is %{PUPPET_SUPPORT_RANGE}): passed
- With Puppet Version Range (Puppet version range is present in requirements in metadata.json): passed
- With Operatingsystem Support: passed
- Operatingsystems: passed
- Supports Only Current Amazon: passed
- Supports Latest Amazon: failed
- Supports Only Current Redhat: passed
- Supports Latest Redhat: failed
- Supports Only Current Centos: passed
- Supports Latest Centos: failed
- Supports Only Current Oraclelinux: passed
- Supports Latest Oraclelinux: failed
- Supports Only Current Scientific: passed
- Supports Latest Scientific: failed
- Supports Only Current Debian: failed
- Supports Latest Debian: failed
- Supports Only Current Ubuntu: passed
- Supports Latest Ubuntu: failed
- In Modulesync Repo (is listed as a module managed using modulesync_config): passed
- Synced (has a .msync.yml file): passed
- Latest Modulesync (has been synchronized with the latest tagged version of modulesync_config): failed
- Has Modulesync (is present in voxpupuli/modulesync_config/managed_modules.yml): passed
- Released (is in modulesync_config and in Forge releases): passed
- Valid Sync File (if an optional sync file is present, it must not contain a `.travis.yml` entry): passed
- Reference Dot Md (the repository has a REFERENCE.md; it needs to be generated / puppet-strings documentation is missing): passed

Open Pull Requests

PS-10287 Improve support for custom my.cnf
tests-fail

I adapted the code from https://github.com/voxpupuli/puppet-proxysql/pull/140 and modified it to resolve a dependency issue on fresh installations.

Open PR in GitHub
modulesync 5.3.0
modulesync

modulesync 5.1.0

Open PR in GitHub
Gtid tracking support
needs-tests
tests-fail
merge-conflicts

Pull Request (PR) description

ProxySQL 2.0.1+ includes a number of schema changes to support GTID tracking for causal consistency reads. The proxy_mysql_server and proxy_mysql_server_no_hostgroup types are currently not aware of the new gtid_port column in the mysql_servers table.

This PR adds 1) a function on the Proxysql parent provider class that indicates whether the installed ProxySQL version has GTID tracking support (has_gtid_tracking?), based on the proxysql_version fact, and 2) support for gtid_port on Proxysql::Server types.

The changes are also backwards compatible. Older and unknown versions ignore the gtid_port parameter.
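The version gate described above can be sketched as a small standalone function. The method and fact names (has_gtid_tracking?, proxysql_version) come from the PR text, but the comparison logic here is an assumption for illustration, not the PR's actual code:

```ruby
# Hypothetical sketch of the version gate: GTID tracking support landed in
# ProxySQL 2.0.1, so any parseable version at or above that qualifies.
GTID_TRACKING_MIN_VERSION = Gem::Version.new('2.0.1')

def has_gtid_tracking?(proxysql_version)
  return false if proxysql_version.nil? || proxysql_version.empty?

  Gem::Version.new(proxysql_version) >= GTID_TRACKING_MIN_VERSION
rescue ArgumentError
  # Unknown or unparseable versions are treated as lacking support, matching
  # the stated behaviour: older and unknown versions ignore gtid_port.
  false
end
```

With a gate like this, the provider can skip the gtid_port column whenever the check returns false, which is what keeps the change backwards compatible.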

This Pull Request (PR) fixes the following issues

n/a

Open PR in GitHub
fix a bug where user updates fail in a ProxySQL cluster
bug
needs-feedback

Let's suppose you have a ProxySQL cluster with two nodes configured like this:

```puppet
class { 'proxysql':
  mysql_servers => [
    {
      'db1' => {
        'port'         => 3306,
        'hostgroup_id' => 1,
      }
    },
    {
      'db2' => {
        'hostgroup_id' => 2,
      }
    },
  ],
  cluster_name  => 'test',
  mysql_users   => [
    {
      'app' => {
        'password'          => '*92C74DFBDA5D60ABD41EFD7EB0DAE389F4646ABB',
        'default_hostgroup' => 1,
      }
    },
    {
      'ro' => {
        'password'          => mysql_password('MyReadOnlyUserPassword'),
        'default_hostgroup' => 2,
      }
    },
  ],
}
```

On the first node you get:

```
Admin> select * from mysql_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| app | *D45AE3CFCDF725E7B8E1AD008208F2B890DE8CA9 | 1 | 0 | 1 | | 0 | 0 | 0 | 1 | 1 | 10000 |
| ro | *26EBF0470CAD1F87FBF1DD6B3F20F97D7EEC3C42 | 1 | 0 | 2 | | 0 | 1 | 0 | 1 | 1 | 10000 |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
2 rows in set (0.00 sec)
```

But on the second node the users get duplicated: you end up with separate rows for frontend and backend (see https://github.com/sysown/proxysql/issues/1580):

```
Admin> select * from mysql_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| app | *D45AE3CFCDF725E7B8E1AD008208F2B890DE8CA9 | 1 | 0 | 1 | | 0 | 0 | 0 | 0 | 1 | 10000 |
| ro | *26EBF0470CAD1F87FBF1DD6B3F20F97D7EEC3C42 | 1 | 0 | 2 | | 0 | 1 | 0 | 0 | 1 | 10000 |
| app | *D45AE3CFCDF725E7B8E1AD008208F2B890DE8CA9 | 1 | 0 | 1 | | 0 | 0 | 0 | 1 | 0 | 10000 |
| ro | *26EBF0470CAD1F87FBF1DD6B3F20F97D7EEC3C42 | 1 | 0 | 2 | | 0 | 1 | 0 | 1 | 0 | 10000 |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
4 rows in set (0.00 sec)
```

If you now change the user configuration on the second node (where the users are duplicated), for example:

```puppet
mysql_users => [
  {
    'app' => {
      'password'               => '*92C74DFBDA5D60ABD41EFD7EB0DAE389F4646ABB',
      'default_hostgroup'      => 1,
      'transaction_persistent' => 0,
    }
  },
```

you get the following error:

```
Error: /Stage[main]/Proxysql::Configure/Proxy_mysql_user[app]: Could not evaluate: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -e UPDATE mysql_users SET `frontend` = '1' WHERE username = 'app'' returned 1: ERROR 1045 (#2800) at line 1: UNIQUE constraint failed: mysql_users.username, mysql_users.frontend
Error: /Stage[main]/Proxysql::Configure/Proxy_mysql_user[ro]: Could not evaluate: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -e UPDATE mysql_users SET `transaction_persistent` = '0', `frontend` = '1' WHERE username = 'ro'' returned 1: ERROR 1045 (#2800) at line 1: UNIQUE constraint failed: mysql_users.username, mysql_users.frontend
```

This is because in ProxySQL the primary key of mysql_users is not username but username+backend. And since the proxysql_user provider defines default values for these columns:

```ruby
backend = @resource.value(:backend) || 1
frontend = @resource.value(:frontend) || 1
```

Puppet tries to update each user back to the default values and hits the constraint violation.

This pull request fixes the issue by removing the default values for the frontend and backend columns (they already have defaults defined in the ProxySQL database), so these values are no longer updated on every Puppet run unless they are explicitly configured.
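The effect of dropping the `|| 1` fallbacks can be sketched with a standalone helper (columns_to_update is a hypothetical name; the real provider code differs):

```ruby
# Hypothetical sketch: with the default fallbacks removed, only columns that
# are explicitly set on the resource are included in the UPDATE statement,
# so unmanaged frontend/backend values are left to ProxySQL's own defaults.
def columns_to_update(resource)
  updates = {}
  updates['backend']  = resource[:backend]  unless resource[:backend].nil?
  updates['frontend'] = resource[:frontend] unless resource[:frontend].nil?
  updates
end
```

On the second node, where ProxySQL's cluster sync already created duplicated rows, an unmanaged column is now simply left alone instead of being forced back to 1 and tripping the unique constraint on username+frontend.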

Open PR in GitHub
modulesync 5.4.0
modulesync

modulesync 5.4.0

Open PR in GitHub
support purging for all types of resources
needs-tests

Hello,

Right now, some resources have:

```ruby
validate do
  raise('name parameter is required.') if (self[:ensure] == :present) && self[:name].nil?
  raise('hostname parameter is required.') if (self[:ensure] == :present) && self[:hostname].nil?
  raise('port parameter is required.') if (self[:ensure] == :present) && self[:port].nil?
end
```

so the error is raised only when the resource should be present, but some resource types do not follow this pattern. Let's make this consistent across all resource types and support purging for all of them.
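The intended behaviour can be sketched outside Puppet's type DSL (the parameter list is taken from the excerpt above; modelling the resource as a plain hash is an assumption for illustration):

```ruby
# Hypothetical standalone version of the validate block above: required
# parameters are enforced only when ensure => present, so a resource with
# ensure => absent (as generated when purging) needs nothing but its title.
def validate!(resource)
  return unless resource[:ensure] == :present

  %i[name hostname port].each do |param|
    raise ArgumentError, "#{param} parameter is required." if resource[param].nil?
  end
end
```

Applying this same rule to every resource type means purging (which generates bare `ensure => absent` resources) never trips the required-parameter checks.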

Open PR in GitHub