2019-04-08
18:12 <otto@deploy1001> scap-helm eventgate-analytics upgrade production -f eventgate-analytics-codfw-values.yaml --reset-values stable/eventgate-analytics [namespace: eventgate-analytics, clusters: codfw] [production]
18:10 <otto@deploy1001> scap-helm eventgate-analytics upgrade production -f eventgate-analytics-codfw-values.yaml --reset-values stable/eventgate-analytics [namespace: eventgate-analytics, clusters: codfw] [production]
18:09 <otto@deploy1001> scap-helm eventgate-analytics finished [production]
18:09 <otto@deploy1001> scap-helm eventgate-analytics cluster staging completed [production]
18:09 <otto@deploy1001> scap-helm eventgate-analytics upgrade staging -f eventgate-analytics-staging-values.yaml --reset-values stable/eventgate-analytics [namespace: eventgate-analytics, clusters: staging] [production]
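(Note: the scap-helm entries above correspond to ordinary Helm upgrades run through the deployment tooling. A minimal sketch of a roughly equivalent plain helm invocation, taking the release, chart and values file from the log entry and leaving cluster credentials aside as assumptions:

    helm upgrade production stable/eventgate-analytics \
        --namespace eventgate-analytics \
        -f eventgate-analytics-codfw-values.yaml \
        --reset-values
)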
18:06 <mobrovac@deploy1001> Started deploy [restbase/deploy@9cf5364]: Lower AQS rate limits and fix recommendation-api spec - T219910 T220221 [production]
17:50 <arturo> T220129 renaming labtestmetal2001.codfw.wmnet to clouddb2001-dev.codfw.wmnet [production]
17:42 <XioNoX> add swift term to cr1/2-eqiad - T220081 [production]
17:14 <onimisionipe@deploy1001> Finished deploy [wdqs/wdqs@c30a540]: GUI updates, Updater with redirect fix and Blazegraph with XSS fix (duration: 11m 17s) [production]
17:03 <onimisionipe@deploy1001> Started deploy [wdqs/wdqs@c30a540]: GUI updates, Updater with redirect fix and Blazegraph with XSS fix [production]
16:59 <mobrovac@deploy1001> Finished deploy [mobileapps/deploy@64f09a0]: Force-deploy to scb1001 to test the config perms (duration: 00m 16s) [production]
16:59 <mobrovac@deploy1001> Started deploy [mobileapps/deploy@64f09a0]: Force-deploy to scb1001 to test the config perms [production]
16:55 <mholloway-shell@deploy1001> Synchronized wmf-config/InitialiseSettings-labs.php: Replace needed WikimediaEditorTasks Beta Cluster config (T220153) (duration: 00m 58s) [production]
16:31 <urandom> bootstrapping cassandra-a, restbase2019 -- T208087 [production]
15:35 <herron> aborting ores to logstash kafka logging pipeline switchover for now. puppet applied only to ores2009, reverting now [production]
15:19 <herron> switching ores to logstash kafka logging pipeline (via temporary puppet disable and rolling puppet agent runs) [production]
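(Note: a sketch of the "temporary puppet disable and rolling puppet agent runs" pattern mentioned above, using standard Puppet agent commands; the disable message is illustrative:

    # on each ores host, before the change lands
    sudo puppet agent --disable 'switching logging pipeline to kafka'
    # then, one host at a time, re-enable and apply
    sudo puppet agent --enable
    sudo puppet agent --test
)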
15:09 <jijiki> Pool mw2206 - T215415 [production]
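(Note: pooling/depooling an app server is normally done with conftool; a hedged sketch, where the exact selector syntax is an assumption:

    sudo confctl select 'name=mw2206.codfw.wmnet' set/pooled=yes   # pool
    sudo confctl select 'name=mw2206.codfw.wmnet' set/pooled=no    # depool
)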
14:55 <papaul> powering down mw2206 for DIMM replacement [production]
14:49 <otto@deploy1001> Finished deploy [analytics/refinery@7fa6fb7]: deploying oozie article recommender for baho (duration: 18m 35s) [production]
14:45 <papaul> powering down elastic2048 for disk replacement [production]
14:30 <otto@deploy1001> Started deploy [analytics/refinery@7fa6fb7]: deploying oozie article recommender for baho [production]
14:17 <anomie@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Setting actor migration to write-both/read-new on test wikis and mediawikiwiki (T188327) (duration: 00m 59s) [production]
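(Note: the "Synchronized wmf-config/..." entries are logged when scap syncs a single config file from the deployment host; a minimal sketch of the command behind an entry like the one above, with the file path and message taken from the entry itself:

    scap sync-file wmf-config/InitialiseSettings.php \
        'Setting actor migration to write-both/read-new on test wikis and mediawikiwiki (T188327)'
)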
14:06 <jijiki> Temporarily serve thumbor traffic on thumbor1001 via haproxy - T187765 [production]
13:41 <moritzm> upgrading job runners in codfw to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
12:31 <hashar> contint2001: upgraded python-pbr 0.8.2-1 -> 1.10.0-1 # T218559 [production]
12:25 <moritzm> upgrading API servers in codfw to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
12:06 <arturo> reboot cloudvirt1009 to clean some ACPI errors in dmesg [production]
12:03 <arturo> T219776 puppet node deactivate labtestnet2003.codfw.wmnet [production]
12:00 <hashar> contint1001 upgraded zuul to 2.5.1-wmf6 # T208426 [production]
11:53 <hoo@deploy1001> Synchronized wmf-config/Wikibase.php: WikibaseClient: Conditionally enable mapframe support (T218051) (duration: 00m 58s) [production]
11:48 <hashar> contint2001: stopping zuul-server, it is not meant to be running there [production]
11:41 <hoo@deploy1001> Synchronized wmf-config/abusefilter.php: Enable blocking feature of AbuseFilter in zh.wikipedia (T210364) (duration: 00m 58s) [production]
11:25 <hoo@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Create uploader user group for thwiki (T216615) (duration: 00m 58s) [production]
11:12 <jijiki> Restarted thumbor services after librsvg upgrade [production]
11:11 <fsero> upgrading envoy to 1.9.1 T215810 [production]
10:42 <jdrewniak@deploy1001> Synchronized portals: Wikimedia Portals Update: [[gerrit:502190| Bumping portals to master (T128546)]] (duration: 00m 58s) [production]
10:41 <jdrewniak@deploy1001> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:502190| Bumping portals to master (T128546)]] (duration: 00m 59s) [production]
10:34 <moritzm> upgrading app servers in codfw to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
10:23 <jijiki> Running debdeploy to upgrade librsvg [production]
09:43 <gehel> force allocation of 3 unassigned shards on elasticsearch / cirrus / eqiad [production]
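(Note: forcing allocation of unassigned shards goes through the Elasticsearch cluster reroute API; a sketch of one such call, where the index name, shard number and target node are placeholders:

    curl -s -XPOST 'http://localhost:9200/_cluster/reroute' \
        -H 'Content-Type: application/json' \
        -d '{"commands":[{"allocate_replica":{"index":"enwiki_content","shard":0,"node":"elastic1030"}}]}'
)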
09:30 <arturo> T219776 puppet node clean labtestnet2003.codfw.wmnet [production]
09:19 <volans> restarting icinga on icinga1001 - T196336 [production]
08:45 <moritzm> upgrading API servers mw1221-mw1235 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
08:34 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:34 <akosiaris@deploy1001> scap-helm zotero cluster staging completed [production]
08:34 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-staging.yaml --reset-values staging stable/zotero [namespace: zotero, clusters: staging] [production]
08:32 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:32 <akosiaris@deploy1001> scap-helm zotero cluster eqiad completed [production]
08:32 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-eqiad.yaml production stable/zotero [namespace: zotero, clusters: eqiad] [production]
08:32 <akosiaris> lower CPU, memory limits for zotero pods. Set 1 cpu, 700Mi. This should help the pods to recover faster in some cases. The old memory leak issues we used to have seem to be no longer present [production]
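(Note: a sketch of how limits like "1 cpu, 700Mi" are typically passed when upgrading the chart, assuming the chart exposes them under conventional resources.limits keys; the exact values-file layout is an assumption:

    helm upgrade production stable/zotero -f zotero-values-eqiad.yaml \
        --set resources.limits.cpu=1,resources.limits.memory=700Mi
)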