2019-04-08
14:06 <jijiki> Temporarily serve thumbor traffic on thumbor1001 via haproxy - T187765 [production]
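(Illustrative note, not part of the log: one way traffic can be temporarily steered to a single backend is via the haproxy runtime API, as in the hedged sketch below; the backend/server names, the second thumbor host and the socket path are assumptions for illustration only.)
  # put the other thumbor server(s) in this backend into maintenance so only thumbor1001 serves
  echo "set server thumbor/thumbor1002 state maint" | socat stdio UNIX-CONNECT:/run/haproxy/haproxy.sock
  # make sure thumbor1001 itself is accepting traffic
  echo "set server thumbor/thumbor1001 state ready" | socat stdio UNIX-CONNECT:/run/haproxy/haproxy.sock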
13:41 <moritzm> upgrading job runners in codfw to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
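(Illustrative note, not part of the log: a minimal sketch of what such a fleet upgrade could look like if driven with cumin and apt; the host alias and package name are assumptions, and the actual rollout was more likely done with debdeploy, as the librsvg entry further down notes. Only the HHVM version string comes from the entry above.)
  # hypothetical host alias and package name; version string taken from the log entry
  sudo cumin 'A:mw-jobrunner-codfw' 'apt-get -y install hhvm=3.18.5+dfsg-1+wmf8+deb9u2'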
12:31 <hashar> contint2001: upgraded python-pbr 0.8.2-1 -> 1.10.0-1 # T218559 [production]
12:25 <moritzm> upgrading API servers in codfw to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
12:06 <arturo> reboot cloudvirt1009 to clean some ACPI errors in dmesg [production]
12:03 <arturo> T219776 puppet node deactivate labtestnet2003.codfw.wmnet [production]
12:00 <hashar> contint1001 upgraded zuul to 2.5.1-wmf6 # T208426 [production]
11:53 <hoo@deploy1001> Synchronized wmf-config/Wikibase.php: WikibaseClient: Conditionally enable mapframe support (T218051) (duration: 00m 58s) [production]
11:48 <hashar> contint2001: stopping zuul-server, it is not meant to be running there [production]
11:41 <hoo@deploy1001> Synchronized wmf-config/abusefilter.php: Enable blocking feature of AbuseFilter in zh.wikipedia (T210364) (duration: 00m 58s) [production]
11:25 <hoo@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Create uploader user group for thwiki (T216615) (duration: 00m 58s) [production]
11:12 <jijiki> Restarted thumbor services after librsvg upgrade [production]
11:11 <fsero> upgrading envoy to 1.9.1 T215810 [production]
10:42 <jdrewniak@deploy1001> Synchronized portals: Wikimedia Portals Update: [[gerrit:502190| Bumping portals to master (T128546)]] (duration: 00m 58s) [production]
10:41 <jdrewniak@deploy1001> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:502190| Bumping portals to master (T128546)]] (duration: 00m 59s) [production]
10:34 <moritzm> upgrading app servers in codfw to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
10:23 <jijiki> Running debdeploy to upgrade librsvg [production]
09:43 <gehel> force allocation of 3 unassigned shards on elasticsearch / cirrus / eqiad [production]
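(Illustrative note, not part of the log: forcing allocation of an unassigned shard is typically done through the Elasticsearch reroute API, as in the hedged sketch below; the index name, shard number and node are made up for illustration.)
  curl -s -X POST 'http://localhost:9200/_cluster/reroute' \
    -H 'Content-Type: application/json' \
    -d '{"commands": [{"allocate_replica": {"index": "enwiki_content", "shard": 3, "node": "elastic1036"}}]}'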
09:30 <arturo> T219776 puppet node clean labtestnet2003.codfw.wmnet [production]
09:19 <volans> restarting icinga on icinga1001 - T196336 [production]
08:45 <moritzm> upgrading API servers mw1221-mw1235 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
08:34 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:34 <akosiaris@deploy1001> scap-helm zotero cluster staging completed [production]
08:34 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-staging.yaml --reset-values staging stable/zotero [namespace: zotero, clusters: staging] [production]
08:32 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:32 <akosiaris@deploy1001> scap-helm zotero cluster eqiad completed [production]
08:32 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-eqiad.yaml production stable/zotero [namespace: zotero, clusters: eqiad] [production]
08:32 <akosiaris> lower CPU and memory limits for zotero pods: set 1 CPU, 700Mi. This should help the pods recover faster in some cases; the old memory leak issues we used to have no longer seem to be present [production]
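(Illustrative note, not part of the log: a minimal sketch of the limits change expressed as plain helm flags; the real change lives in the zotero-values-*.yaml files referenced by the scap-helm commands above, and the values key paths are assumptions about the chart's layout.)
  # release and chart names follow the scap-helm commands logged above; key paths are hypothetical
  helm upgrade --reuse-values \
    --set resources.limits.cpu=1 \
    --set resources.limits.memory=700Mi \
    production stable/zotero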
08:31 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:31 <akosiaris@deploy1001> scap-helm zotero cluster codfw completed [production]
08:31 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-codfw.yaml production stable/zotero [namespace: zotero, clusters: codfw] [production]
08:17 <godog> delete fundraising folder from public grafana - T219825 [production]
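(Illustrative note, not part of the log: one way such a deletion could be done is through the Grafana HTTP API, as in the hedged sketch below; the folder uid and the use of an API token are assumptions, and the deletion may just as well have been performed in the UI.)
  # hypothetical folder uid; requires an admin-capable API token
  curl -s -X DELETE \
    -H "Authorization: Bearer $GRAFANA_API_TOKEN" \
    https://grafana.wikimedia.org/api/folders/fundraising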
08:01 <godog> bounce grafana after https://gerrit.wikimedia.org/r/c/operations/puppet/+/501519 [production]
07:59 <moritzm> upgrading mw1266-mw1275 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
07:59 <moritzm> upgrading mw1266-mw1255 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
07:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool all slaves in x1 T217453 (duration: 00m 58s) [production]
07:19 <marostegui> Deploy schema change on the first 10 wikis - T217453 [production]
07:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool all slaves in x1 T217453 (duration: 00m 59s) [production]
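(Illustrative note, not part of the log: a minimal sketch of the workflow behind the "Synchronized wmf-config/db-eqiad.php" entries, assuming the standard scap file sync from the deployment host's staging checkout; the actual depool is an edit to the x1 replica weights inside that file.)
  cd /srv/mediawiki-staging
  # edit wmf-config/db-eqiad.php: zero out or comment the x1 replica weights
  scap sync-file wmf-config/db-eqiad.php 'Depool all slaves in x1 T217453'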
07:02 <moritzm> installing wget security updates [production]
07:02 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool all slaves in x1 T143763 (duration: 00m 58s) [production]
06:34 <_joe_> restarted netbox, SIGSEGV on HUP-induced reload [production]
05:20 <marostegui> Deploy schema change on x1 master with replication, there will be lag on x1 slaves T143763 [production]
05:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool all slaves in x1 T219777 T143763 (duration: 01m 30s) [production]