====== [TROUBLESHOOT] Ceph: too many PGs per OSD ======
^ Documentation ^|
^Name:| [TROUBLESHOOT] Ceph: too many PGs per OSD |
^Description:| How to deal with the "too many PGs per OSD" health warning |
^Modification date:|11/04/2019|
^Owner:|dodger|
^Notify changes to:|Owner |
^Tags:|ceph, object storage |
^Escalate to:|The_fucking_bofh|
====== WARNING ======
This document covers the **TOO MANY** PGs per OSD warning, not //too few// (documented [[linux:ceph:troubleshooting:too_few_pgs_per_osd|here]]).
====== Understanding the problem ======
The problem and its solution are explained in detail here:\\
[[https://stackoverflow.com/questions/39589696/ceph-too-many-pgs-per-osd-all-you-need-to-know]]\\
To keep the solution safe, it is written down below.
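In short: Ceph raises the warning when the average number of PG replicas per OSD (total PGs multiplied by each pool's replication size, divided by the number of OSDs) exceeds the configured threshold. A quick sketch for gathering those numbers yourself, using only standard ''ceph'' CLI commands:
<code bash>
# show the health warning with the ratio ceph computed
ceph health detail | grep -i 'pgs per osd'

# inputs to compute the ratio by hand:
ceph osd stat              # number of OSDs (up/in)
ceph osd pool ls detail    # pg_num and replication "size" of every pool
</code>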
====== Raw paste: PG count per OSD and pool ======
The following awk one-liner (taken from the StackOverflow answer above) prints a table with the number of PGs each OSD holds, broken down by pool:
<code bash>
ceph pg dump | awk '
BEGIN { IGNORECASE = 1 }
# locate the "UP" column in the header line
/^PG_STAT/ { col=1; while($col!="UP") {col++}; col++ }
# for every PG line: extract the pool id and the OSDs in its UP set
/^[0-9a-f]+\.[0-9a-f]+/ { match($0,/^[0-9a-f]+/); pool=substr($0, RSTART, RLENGTH); poollist[pool]=0;
up=$col; i=0; RSTART=0; RLENGTH=0; delete osds; while(match(up,/[0-9]+/)>0) { osds[++i]=substr(up,RSTART,RLENGTH); up = substr(up, RSTART+RLENGTH) }
for(i in osds) {array[osds[i],pool]++; osdlist[osds[i]];}
}
# print a pool-by-OSD matrix of PG counts, with row and column sums
END {
printf("\n");
printf("pool :\t"); for (i in poollist) printf("%s\t",i); printf("| SUM \n");
for (i in poollist) printf("--------"); printf("----------------\n");
for (i in osdlist) { printf("osd.%i\t", i); sum=0;
for (j in poollist) { printf("%i\t", array[i,j]); sum+=array[i,j]; sumpool[j]+=array[i,j] }; printf("| %i\n",sum) }
for (i in poollist) printf("--------"); printf("----------------\n");
printf("SUM :\t"); for (i in poollist) printf("%s\t",sumpool[i]); printf("|\n");
}'
</code>
That's it: each cell shows how many PGs of the given pool live on the given OSD.
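If you only need the per-OSD totals without the per-pool breakdown, a shorter alternative (assuming Luminous or newer, where the output includes a ''PGS'' column) is:
<code bash>
# per-OSD utilisation, variance and total PG count in one shot
ceph osd df tree
</code>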
In case you were tempted: the procedure for dropping a pool is documented in [[linux:ceph:howtos:howto_remove_pool|[HOWTO] Completely remove a POOL from cluster]].\\
But **don't drop the pool**, continue reading.
====== The "solution" ======
If you're sure that you really need that many PGs per OSD, raise the warning threshold in the configuration.\\
Add the following to your ''ceph.conf'' under the ''[global]'' section (on pre-Luminous releases the option was called ''mon pg warn max per osd''):
<code ini>
mon max pg per osd = ${NEW_PG_PER_OSD_NUMBER}
</code>
Then distribute the updated ''ceph.conf'' to every node in the cluster.
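On Mimic or newer you can skip editing files and persist the setting in the monitors' central config database instead; a sketch using the standard ''ceph config'' commands:
<code bash>
# store the new threshold cluster-wide
ceph config set global mon_max_pg_per_osd ${NEW_PG_PER_OSD_NUMBER}
# verify what the monitors will hand out
ceph config get mon mon_max_pg_per_osd
</code>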
You can also apply the setting to the running daemons without having to restart anything. Note that ''mon_max_pg_per_osd'' is read by the monitors and the mgr (which raises the health warning), not by the OSDs, so target the mons:
<code bash>
# check
ceph tell 'mon.*' config get mon_max_pg_per_osd
# change
ceph tell 'mon.*' config set mon_max_pg_per_osd ${NEW_PG_PER_OSD_NUMBER}
# check again
ceph tell 'mon.*' config get mon_max_pg_per_osd
</code>
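Finally, confirm that the warning is gone (it can take a few moments for the health check to re-evaluate):
<code bash>
# overall status should return to HEALTH_OK
ceph -s
# and the detailed output should no longer mention PGs per OSD
ceph health detail
</code>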