2019-04-08
09:30 <arturo> T219776 puppet node clean labtestnet2003.codfw.wmnet [production]
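The "puppet node clean" above is the puppetmaster-side removal of a decommissioned host: it revokes the host's certificate and purges its cached facts, reports and node object. A minimal sketch of the pattern, assuming the standard Puppet CLI; the deactivate step is an assumption, not part of the logged command:

    import subprocess

    host = "labtestnet2003.codfw.wmnet"

    # Revoke the host's certificate and purge its stored data from the
    # puppetmaster (this is the command the log records).
    subprocess.run(["puppet", "node", "clean", host], check=True)

    # Commonly paired with deactivation so PuppetDB stops exporting the
    # host's resources (assumption: not shown in the log entry).
    subprocess.run(["puppet", "node", "deactivate", host], check=True)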
09:19 <volans> restarting icinga on icinga1001 - T196336 [production]
08:45 <moritzm> upgrading API servers mw1221-mw1235 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
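Fleet upgrades like the HHVM one above are normally rolled host by host: each appserver is depooled from the load balancer, upgraded, and repooled before the next one is touched, so the API pool never loses more than one server at a time. A rough sketch of that loop; the depool/pool wrapper commands, SSH access and package names are assumptions, not taken from the log:

    import subprocess

    # The API appservers named in the log entry.
    hosts = [f"mw{n}.eqiad.wmnet" for n in range(1221, 1236)]

    for host in hosts:
        # Take the server out of rotation before touching it.
        subprocess.run(["ssh", host, "depool"], check=True)
        # Install the new build (exact package set is an assumption).
        subprocess.run(["ssh", host, "sudo apt-get install -y hhvm"], check=True)
        # Put it back in rotation once the upgrade is done.
        subprocess.run(["ssh", host, "pool"], check=True)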
08:34 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:34 <akosiaris@deploy1001> scap-helm zotero cluster staging completed [production]
08:34 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-staging.yaml --reset-values staging stable/zotero [namespace: zotero, clusters: staging] [production]
08:32 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:32 <akosiaris@deploy1001> scap-helm zotero cluster eqiad completed [production]
08:32 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-eqiad.yaml production stable/zotero [namespace: zotero, clusters: eqiad] [production]
08:32 <akosiaris> lower CPU, memory limits for zotero pods. Set 1 cpu, 700Mi. This should help the pods to recover faster in some cases. The old memory leak issues we used to have seem to be no longer present [production]
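The limits change above would land in the zotero-values-*.yaml files that the scap-helm upgrades pass with -f: a Kubernetes resources stanza capping each pod at 1 CPU and 700Mi of memory. A sketch of what that stanza could look like, generated from Python for illustration; the key layout around "resources" and the requests values are assumptions about the zotero chart:

    import yaml  # PyYAML

    # Limits are taken from the log entry; requests are an assumption.
    resources = {
        "resources": {
            "limits": {"cpu": "1", "memory": "700Mi"},
            "requests": {"cpu": "1", "memory": "700Mi"},
        }
    }

    print(yaml.safe_dump(resources, default_flow_style=False))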
08:31 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:31 <akosiaris@deploy1001> scap-helm zotero cluster codfw completed [production]
08:31 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-codfw.yaml production stable/zotero [namespace: zotero, clusters: codfw] [production]
08:17 <godog> delete fundraising folder from public grafana - T219825 [production]
08:01 <godog> bounce grafana after https://gerrit.wikimedia.org/r/c/operations/puppet/+/501519 [production]
07:59 <moritzm> upgrading mw1266-mw1275 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
07:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool all slaves in x1 T217453 (duration: 00m 58s) [production]
07:19 <marostegui> Deploy schema change on the first 10 wikis - T217453 [production]
07:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool all slaves in x1 T217453 (duration: 00m 59s) [production]
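Read bottom-up, the three entries above are one pass of the usual online schema-change loop: depool the x1 replicas by syncing a config change, run the ALTER against the depooled hosts (here covering the first 10 wikis), then repool the same way. A condensed sketch of that pattern; the replica names, the DDL and the helper are placeholders, not taken from the log:

    import subprocess

    replicas = ["x1-replica1.example", "x1-replica2.example"]  # placeholders
    ddl = "ALTER TABLE example_table ADD example_col INT"      # placeholder

    def sync_db_config(message: str) -> None:
        # Stands in for editing wmf-config/db-eqiad.php and syncing it,
        # which is what produced the Synchronized entries above.
        subprocess.run(
            ["scap", "sync-file", "wmf-config/db-eqiad.php", message],
            check=True,
        )

    sync_db_config("Depool all slaves in x1 T217453")
    for host in replicas:
        # Apply the schema change on each depooled replica.
        subprocess.run(["mysql", "-h", host, "-e", ddl], check=True)
    sync_db_config("Repool all slaves in x1 T217453")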
07:02 <moritzm> installing wget security updates [production]
07:02 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool all slaves in x1 T143763 (duration: 00m 58s) [production]
06:34 <_joe_> restarted netbox, SIGSEGV on HUP-induced reload [production]
05:20 <marostegui> Deploy schema change on x1 master with replication, there will be lag on x1 slaves T143763 [production]
05:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool all slaves in x1 T219777 T143763 (duration: 01m 30s) [production]
2019-04-05
23:10 <thcipriani> revert some recent problematic gerrit acl changes [production]
22:46 <chaomodus> restarted pdfrender on scb1002 T174916 [production]
21:45 <hashar> thcipriani restarted Gerrit. CI works again # T220243 [production]
21:37 <thcipriani> restarting gerrit [production]
21:29 <hashar> CI / Zuul is no longer processing events / T220243 [production]
17:29 <thcipriani> gerrit back on 2.15.11 [production]
17:27 <thcipriani> restart gerrit [production]
17:26 <thcipriani@deploy1001> Finished deploy [gerrit/gerrit@a4e66d4]: Gerrit back to 2.15.11 on cobalt (restart incoming) (duration: 00m 11s) [production]
17:26 <thcipriani@deploy1001> Started deploy [gerrit/gerrit@a4e66d4]: Gerrit back to 2.15.11 on cobalt (restart incoming) [production]
17:25 <thcipriani@deploy1001> Finished deploy [gerrit/gerrit@a4e66d4]: Gerrit back to 2.15.11 (on gerrit2001 only) (duration: 00m 10s) [production]
17:25 <thcipriani@deploy1001> Started deploy [gerrit/gerrit@a4e66d4]: Gerrit back to 2.15.11 (on gerrit2001 only) [production]
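Read bottom-up, the six entries above form a staged rollback: the 2.15.11 build goes to the spare gerrit2001 first, then to cobalt (the primary, flagged "restart incoming"), and only then is Gerrit restarted. A sketch of that ordering; treating --limit as the host selector and the service restart command are assumptions about the exact invocations:

    import subprocess
    from typing import Optional

    def deploy(message: str, limit: Optional[str] = None) -> None:
        # A scap deploy from the gerrit/gerrit checkout is what produced
        # the Started/Finished lines above.
        cmd = ["scap", "deploy", message]
        if limit is not None:
            cmd += ["--limit", limit]
        subprocess.run(cmd, check=True)

    # Stage 1: the spare host only, to catch a broken build early.
    deploy("Gerrit back to 2.15.11 (on gerrit2001 only)", limit="gerrit2001")
    # Stage 2: the primary, with a restart to follow.
    deploy("Gerrit back to 2.15.11 on cobalt (restart incoming)", limit="cobalt")
    # Stage 3: restart Gerrit so the rolled-back build takes over.
    subprocess.run(["sudo", "service", "gerrit", "restart"], check=True)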
17:19 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.23/includes/diff/TextSlotDiffRenderer.php: Ia326c67de28a4e / T220217 (duration: 01m 02s) [production]
17:12 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.24/includes/diff/TextSlotDiffRenderer.php: Ia326c67de28a4e / T220217 (duration: 01m 00s) [production]
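The same change (Ia326c67de28a4e) is synced twice above because both MediaWiki branches, wmf.23 and wmf.24, were serving traffic at the time, so a backported fix has to reach each deployed branch. A sketch of that both-branches pattern; the scap sync-file shape mirrors the log messages, while looping over a branch list is illustrative framing:

    import subprocess

    # Both deployed branches need the fix.
    branches = ["php-1.33.0-wmf.24", "php-1.33.0-wmf.23"]
    rel_path = "includes/diff/TextSlotDiffRenderer.php"

    for branch in branches:
        subprocess.run(
            ["scap", "sync-file", f"{branch}/{rel_path}",
             "Ia326c67de28a4e / T220217"],
            check=True,
        )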
16:02 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.24/includes/jobqueue/jobs/RefreshLinksJob.php: Ib1ac31365f9c / T220037 (duration: 00m 59s) [production]
15:58 <ejegg> re-enabled recurring donations queue consumer [production]
15:57 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.24/extensions/NavigationTiming/: I6b23be850d35c7d19 / T220156 (duration: 01m 00s) [production]
15:51 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.24/extensions/GlobalBlocking/includes/specials/: I5843cd181ca7d (duration: 01m 02s) [production]
15:08 <ejegg> upgraded fundraising CiviCRM from 3c55850631 to 83478013a8 [production]
15:01 <ejegg> disabled recurring donation queue consumer [production]
14:55 <papaul> powering down restbase2019 and 2020 for relocation [production]