GitHub puppet-proxysql
Puppet module to configure ProxySQL

Metadata Valid
Correct Puppet Version Range
Supported Puppet version range is %{PUPPET_VERSION_RANGE}
With Puppet Version Range
Puppet version range is present in requirements in metadata.json
With Operatingsystem Support
Supports Only Current Centos
Supports Latest Centos
Supports Only Current Debian
Supports Latest Debian
Supports Only Current Ubuntu
Supports Latest Ubuntu
In Modulesync Repo
In Plumbing
Is in plumbing
Has Secrets
Has a .sync.yml file
Synced
Has a .msync.yml file
Latest Modulesync
Has Modulesync
Is present in voxpupuli/modulesync_config/managed_modules.yml
Released
Is in modulesync_config and in forge releases.
Reference Dot Md
The repository has a REFERENCE.md. If it does not, one needs to be generated, since the puppet-strings documentation is missing.

Open Pull Requests

Release 5.0.2
skip-changelog

Fixes #146

Fixup Amazon 2016 issues
bug

Not sure if this fixes all issues that might stop this module working on
Amazon 2016 (which is listed in the metadata.json), but it gets the
tests passing again and is a step in the right direction.

make proxysql folder and config permissions configurable
merge-conflicts

Hello,

Let's make the default permissions for the ProxySQL config files and folder look more like the permissions for other services (like MySQL).
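
A minimal sketch of how this could look from the user's side. The parameter names (config_file_mode, datadir_mode) are assumptions for illustration, not the module's confirmed API:

```puppet
# Hypothetical parameters for illustration only; not the module's confirmed API.
class { 'proxysql':
  config_file_mode => '0640',  # proxysql.cnf readable by owner/group only
  datadir_mode     => '0750',  # keep the data directory out of "other" reach
}
```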

WIP: Move non OS specific defaults out of params.pp
merge-conflicts

And make a start at converting to puppet-strings
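
For reference, converting to puppet-strings means annotating classes and parameters with structured comment tags. A minimal sketch; the parameter names, types, and defaults below are illustrative, not the module's exact interface:

```puppet
# @summary Installs and configures ProxySQL.
#
# @param listen_port
#   Port on which the ProxySQL MySQL interface listens.
# @param admin_password
#   Password for the ProxySQL admin interface.
class proxysql (
  Integer[1, 65535] $listen_port    = 6033,
  String[1]         $admin_password = 'admin',
) {
  # ...
}
```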

support purging for all types of resources

Hello,

Right now, in some resources we have:
```ruby
validate do
  raise('name parameter is required.') if (self[:ensure] == :present) && self[:name].nil?
  raise('hostname parameter is required.') if (self[:ensure] == :present) && self[:hostname].nil?
  raise('port parameter is required.') if (self[:ensure] == :present) && self[:port].nil?
end
```

so the error is raised only when the resource should be present, but this is not the case for every resource type.
So let's make this consistent across all resource types and support purging for all of them.
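
Once the types behave consistently, unmanaged entries could be removed with Puppet's stock resources metatype. A sketch, assuming proxy_mysql_user gains the purge support this PR proposes:

```puppet
# Remove any mysql_users rows that are not declared in the catalog.
# Assumes proxy_mysql_user can enumerate and purge instances (what this PR adds).
resources { 'proxy_mysql_user':
  purge => true,
}
```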

fix a bug related to user update failed in case of proxysql cluster
bug
needs-feedback

Let's suppose you have a ProxySQL cluster with two nodes configured like this:
```puppet
class { 'proxysql':
  mysql_servers => [
    {
      'db1' => {
        'port'         => 3306,
        'hostgroup_id' => 1,
      }
    },
    {
      'db2' => {
        'hostgroup_id' => 2,
      }
    },
  ],
  cluster_name  => 'test',
  mysql_users   => [
    {
      'app' => {
        'password'          => '*92C74DFBDA5D60ABD41EFD7EB0DAE389F4646ABB',
        'default_hostgroup' => 1,
      }
    },
    {
      'ro' => {
        'password'          => mysql_password('MyReadOnlyUserPassword'),
        'default_hostgroup' => 2,
      }
    },
  ],
}
```

On the first node you will get:

```
Admin> select * from mysql_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| app      | *D45AE3CFCDF725E7B8E1AD008208F2B890DE8CA9 | 1      | 0       | 1                 |                | 0             | 0                      | 0            | 1       | 1        | 10000           |
| ro       | *26EBF0470CAD1F87FBF1DD6B3F20F97D7EEC3C42 | 1      | 0       | 2                 |                | 0             | 1                      | 0            | 1       | 1        | 10000           |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
2 rows in set (0.00 sec)
```

But on the second node the users get duplicated: you will have separate users for frontend and backend (see https://github.com/sysown/proxysql/issues/1580):

```
Admin> select * from mysql_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| app      | *D45AE3CFCDF725E7B8E1AD008208F2B890DE8CA9 | 1      | 0       | 1                 |                | 0             | 0                      | 0            | 0       | 1        | 10000           |
| ro       | *26EBF0470CAD1F87FBF1DD6B3F20F97D7EEC3C42 | 1      | 0       | 2                 |                | 0             | 1                      | 0            | 0       | 1        | 10000           |
| app      | *D45AE3CFCDF725E7B8E1AD008208F2B890DE8CA9 | 1      | 0       | 1                 |                | 0             | 0                      | 0            | 1       | 0        | 10000           |
| ro       | *26EBF0470CAD1F87FBF1DD6B3F20F97D7EEC3C42 | 1      | 0       | 2                 |                | 0             | 1                      | 0            | 1       | 0        | 10000           |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
4 rows in set (0.00 sec)
```

If you now change the user configuration on the second node (where you have the duplicated users), for example add:
```puppet
mysql_users => [
  {
    'app' => {
      'password'               => '*92C74DFBDA5D60ABD41EFD7EB0DAE389F4646ABB',
      'default_hostgroup'      => 1,
      'transaction_persistent' => 0,
    }
  },
],
```

you will get the following error:

```
Error: /Stage[main]/Proxysql::Configure/Proxy_mysql_user[app]: Could not evaluate: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -e UPDATE mysql_users SET `frontend` = '1' WHERE username = 'app'' returned 1: ERROR 1045 (#2800) at line 1: UNIQUE constraint failed: mysql_users.username, mysql_users.frontend
Error: /Stage[main]/Proxysql::Configure/Proxy_mysql_user[ro]: Could not evaluate: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -e UPDATE mysql_users SET `transaction_persistent` = '0', `frontend` = '1' WHERE username = 'ro'' returned 1: ERROR 1045 (#2800) at line 1: UNIQUE constraint failed: mysql_users.username, mysql_users.frontend
```

This is because in ProxySQL the primary key is not username, but username+backend. And since we have default values for these columns defined in the proxy_mysql_user resource:

```ruby
backend = @resource.value(:backend) || 1
frontend = @resource.value(:frontend) || 1
```

Puppet therefore tries to update each of the users back to these default values and hits the constraint violation.

This pull request fixes the issue by removing the default values for the frontend and backend columns (they already have defaults defined in the ProxySQL database), so these values won't be updated on every Puppet run unless they are explicitly configured.
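
With the defaults gone, the flags are only managed when set explicitly. A sketch of pinning them on a single resource; the attribute names come from the provider code above, while the values are illustrative:

```puppet
# Only set backend/frontend explicitly when a non-default value is needed;
# otherwise ProxySQL's own column defaults apply and no UPDATE is issued.
proxy_mysql_user { 'app':
  ensure            => present,
  password          => '*92C74DFBDA5D60ABD41EFD7EB0DAE389F4646ABB',
  default_hostgroup => 1,
  backend           => 1,
  frontend          => 1,
}
```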

PS-10287 Improve support for custom my.cnf

Pull Request (PR) description

  • Created a fact, proxysql_mycnf_file_name, for identifying a custom my.cnf location.
  • Using Class['proxysql::admin_credentials'] as the autorequire instead of the hardcoded /root/.my.cnf.
  • This is needed when a custom my.cnf location is being used.

Add support for changing stat user's credentials

Pull Request (PR) description

Parametrizes the username and password for the stats user, much like for the admin user.
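
A usage sketch, assuming parameter names analogous to the existing admin credentials; the names below are assumptions, not the PR's confirmed interface:

```puppet
# Assumed parameter names for the parametrized stats user credentials.
class { 'proxysql':
  stats_username => 'monitoring',
  stats_password => 'ChangeMeStatsPassword',
}
```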