2019-03-14
16:08 <arturo> reimaging cloudvirt1015 again [production]
16:04 <akosiaris> reboot all sessionstore[12]00[123] servers one final time [production]
16:02 <arturo> T216497 drop python-dogpile.cache from jessie-wikimedia/openstack-mitaka-jessie [production]
14:57 <marostegui> Start replication on db2070 after testing url_notes [production]
14:53 <mutante> analytics-tool1003 - stopping idle screen session [production]
14:43 <marostegui> Stop replication on db2070 to test the url_notes (will alert only on IRC) [production]
14:21 <otto@deploy1001> scap-helm eventgate-analytics finished [production]
14:21 <otto@deploy1001> scap-helm eventgate-analytics cluster eqiad completed [production]
14:21 <otto@deploy1001> scap-helm eventgate-analytics upgrade production -f eventgate-analytics-eqiad-values.yaml --set main_app.version=v1.0.3-wmf0 stable/eventgate-analytics [namespace: eventgate-analytics, clusters: eqiad] [production]
14:09 <otto@deploy1001> scap-helm eventgate-analytics finished [production]
14:09 <otto@deploy1001> scap-helm eventgate-analytics cluster staging completed [production]
14:09 <otto@deploy1001> scap-helm eventgate-analytics upgrade staging -f eventgate-analytics-staging-values.yaml stable/eventgate-analytics [namespace: eventgate-analytics, clusters: staging] [production]
13:54 <godog> take a snapshot of data on prometheus2004 [production]
13:50 <arturo> reimaging cloudvirt1015 [production]
13:36 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1081 into API (duration: 00m 48s) [production]
13:15 <arturo> T216497 drop libpulse0 from jessie-wikimedia/openstack-mitaka-jessie [production]
13:15 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1081 into API (duration: 00m 49s) [production]
13:10 <arturo> T216497 drop python-mysqldb from jessie-wikimedia/openstack-mitaka-jessie [production]
13:10 <zfilipin@deploy1001> rebuilt and synchronized wikiversions files: all wikis to 1.33.0-wmf.21 [production]
12:50 <jiji@cumin1001> conftool action : set/pooled=yes; selector: dc=codfw,service=cxserver,cluster=scb,name=kubernetes.* [production]
12:49 <jiji@cumin1001> conftool action : set/weight=1; selector: dc=codfw,service=cxserver,cluster=scb,name=kubernetes.* [production]
12:42 <jijiki> Ramp up k8s cxserver traffic to 8% - T213195 [production]
12:22 <jiji@cumin1001> conftool action : set/pooled=yes; selector: dc=eqiad,service=cxserver,cluster=scb,name=kubernetes.* [production]
12:21 <jiji@cumin1001> conftool action : set/weight=1; selector: dc=eqiad,service=cxserver,cluster=scb,name=kubernetes.* [production]
12:17 <jijiki> Send ~4% of cxserver traffic to eqiad k8s - T213195 [production]
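The `set/weight=1` entries above shift only a small slice of traffic because the load balancer distributes requests in proportion to the weights of all pooled backends. A minimal sketch of that arithmetic (the backend names, counts, and weights here are hypothetical, not taken from the actual eqiad pool):

```python
def traffic_share(weights: dict[str, int], targets: list[str]) -> float:
    """Fraction of traffic the target backends receive under
    weight-proportional load balancing."""
    total = sum(weights.values())
    return sum(weights[t] for t in targets) / total

# Hypothetical pool: 23 legacy scb backends at weight 10, plus 10
# kubernetes backends just pooled at weight 1 each.
pool = {f"scb{i}": 10 for i in range(23)}
pool.update({f"kube{i}": 1 for i in range(10)})

share = traffic_share(pool, [b for b in pool if b.startswith("kube")])
print(f"{share:.1%}")  # a few percent, in line with the ~4% noted in the log
```

Raising the kubernetes weights (or depooling scb hosts) then ramps the share up, which is what the later 8% entry does.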
12:14 <zeljkof> EU SWAT finished [production]
12:13 <kartik@deploy1001> Synchronized wmf-config: SWAT: [[gerrit:496418]] Revert "Correct the enable context detection configuration" (duration: 00m 56s) [production]
12:12 <arturo> T216497 drop some packages from jessie-wikimedia/openstack-mitaka-jessie: qemu-XXX [production]
12:06 <arturo> T216497 drop some packages from jessie-wikimedia/openstack-mitaka-jessie: libvirt*, librados2, librbd1, because they cause the resolver to conflict with the versions included in stretch [production]
12:02 <kartik@deploy1001> Synchronized wmf-config: SWAT: Revert [[gerrit:496412]] Fix content detection config (duration: 00m 56s) [production]
11:58 <kartik@deploy1001> scap failed: average error rate on 3/11 canaries increased by 10x (rerun with --force to override this check, see https://logstash.wikimedia.org/goto/db09a36be5ed3e81155041f7d46ad040 for details) [production]
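The aborted sync above comes from scap's canary check: the deploy first goes to a few canary servers, and the sync is halted when their error rate spikes relative to the pre-deploy baseline. This is not scap's actual implementation, just a sketch of the gist of that check (hostnames, rates, and the 10x threshold shape are illustrative):

```python
def failing_canaries(before: dict[str, float],
                     after: dict[str, float],
                     ratio: float = 10.0) -> list[str]:
    """Canaries whose error rate grew by at least `ratio` after the deploy."""
    return [host for host in before
            if before[host] > 0 and after[host] / before[host] >= ratio]

# 11 canaries with a 0.1 errors/s baseline; three regress after the deploy.
before = {f"mw{i}": 0.1 for i in range(11)}
after = dict(before, mw0=1.5, mw1=2.0, mw2=1.0)

bad = failing_canaries(before, after)
assert len(bad) == 3  # "3/11 canaries increased by 10x" -> abort the sync
```

When the check trips, scap refuses to continue to the rest of the fleet unless rerun with `--force`, as the log message notes.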
11:45 <kartik@deploy1001> Synchronized php-1.33.0-wmf.21/skins/MinervaNeue: SWAT: [[gerrit:496364|Ensure page-actions icons are `display:block` (T218182)]] (duration: 00m 57s) [production]
11:15 <kartik@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:493672]] Enable ExternalGuidance for all Wikipedias (T216129) (duration: 00m 57s) [production]
10:57 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1081 (duration: 00m 57s) [production]
10:50 <ema@puppetmaster1001> conftool action : set/pooled=yes; selector: name=cp2002.codfw.wmnet,service=varnish-fe [production]
10:50 <ema@puppetmaster1001> conftool action : set/pooled=yes; selector: name=cp2002.codfw.wmnet,service=nginx [production]
10:50 <ema> cp2002: pool varnish-fe to resume ATS testing T213263 [production]
10:44 <moritzm> installing libsdl1.2 security updates for jessie [production]
10:21 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1081 (duration: 00m 58s) [production]
09:54 <hashar> ci: live hacked job https://integration.wikimedia.org/ci/job/quibble-vendor-mysql-hhvm-docker/ in attempt to capture 'core' files from hhvm | https://gerrit.wikimedia.org/r/#/c/integration/config/+/496392/ | T216689 [production]
09:02 <mutante> ms-be2037 - down for a couple of hours, no SAL entry or ticket, powercycling [production]
08:44 <marostegui> Deploy schema change on s4 codfw master (db2051), this will generate lag on codfw [production]
08:26 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1088 (duration: 00m 53s) [production]
08:21 <marostegui> Upgrade s3 codfw master (db2043), there will be lag on s3 codfw [production]
08:10 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic to db1088 (duration: 00m 55s) [production]
07:54 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1088 (duration: 00m 55s) [production]
07:48 <akosiaris@deploy1001> scap-helm cxserver finished [production]
07:48 <akosiaris@deploy1001> scap-helm cxserver cluster codfw completed [production]
07:48 <akosiaris@deploy1001> scap-helm cxserver upgrade -f cxserver-codfw-values.yaml production stable/cxserver [namespace: cxserver, clusters: codfw] [production]
07:42 <marostegui> Upgrade db1088 [production]