2019-05-06
09:35 <elukey> restart netbox on netmon1002 (trying to reproduce the segfault) - T212697 [production]
09:03 <godog> upgrade labmon1001 to prometheus 2 - T187987 [production]
06:01 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Give some API traffic to db1093 (duration: 00m 52s) [production]
05:08 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Give some weight to db1093 (duration: 00m 58s) [production]
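(The two db1093 syncs above are single-file config pushes from the deployment host; a minimal sketch, assuming the standard scap sync-file invocation from the /srv/mediawiki-staging working copy with the db-eqiad.php change already merged:)
$ cd /srv/mediawiki-staging
$ scap sync-file wmf-config/db-eqiad.php 'Give some weight to db1093'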
04:08 <ariel@deploy1001> Finished deploy [dumps/dumps@b4b7733]: reduce sleep time more between wikis for incrs (duration: 00m 05s) [production]
04:08 <ariel@deploy1001> Started deploy [dumps/dumps@b4b7733]: reduce sleep time more between wikis for incrs [production]
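(The Started/Finished pair above is a scap3 deploy of the dumps repo; a minimal sketch, assuming the checkout lives under /srv/deployment/dumps/dumps on the deployment host:)
$ cd /srv/deployment/dumps/dumps
$ git pull
$ scap deploy 'reduce sleep time more between wikis for incrs'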
2019-05-05
14:42 <elukey> restart pdfrender on scb1004 [production]
03:10 <chaomodus> FYI: scb* flapping on some endpoints seems to be just noise; there is high load from mobileapi but things appear to be operating normally otherwise. Several boxes are in the process of an md (software RAID) check, which may account for the service lags [production]
02:40 <andrewbogott> restarting mariadb on cloudservices1003 [production]
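(Both restarts above are ordinary systemd service restarts on the named hosts; a minimal sketch, assuming the unit names match the service names in the log:)
$ sudo systemctl restart pdfrender   # on scb1004
$ sudo systemctl restart mariadb     # on cloudservices1003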
2019-05-04
22:20 <reedy@deploy1001> Synchronized docroot/mediawiki/xml/index.html: Add extra xml namespace links (duration: 01m 06s) [production]
10:38 <ariel@deploy1001> Finished deploy [dumps/dumps@26b52ef]: misc small fixes, reduce sleep time for incr wikis (duration: 00m 09s) [production]
10:38 <ariel@deploy1001> Started deploy [dumps/dumps@26b52ef]: misc small fixes, reduce sleep time for incr wikis [production]
2019-05-03
23:50 <thcipriani> gerrit back [production]
23:49 <thcipriani> gerrit restart due to threads piling up [production]
22:09 <XioNoX> clear v4 BGP to AS17451 on cr1-eqsin/cr4-ulsfo [production]
17:16 <arturo> T222148 aborrero@labstore1005:~ $ sudo apt-get install libudev1 udev systemd systemd-sysv libsystemd0 [production]
17:15 <arturo> T222148 aborrero@labstore1004:~ $ sudo apt-get install libudev1 udev systemd systemd-sysv libsystemd0 [production]
17:11 <arturo> T222148 aborrero@labpuppetmaster1002:~ $ sudo apt-get install libudev1 udev systemd systemd-sysv libsystemd0 [production]
17:10 <arturo> T222148 aborrero@labpuppetmaster1001:~ $ sudo apt-get install libudev1 udev systemd systemd-sysv libsystemd0 [production]
17:09 <arturo> T222148 aborrero@labtestpuppetmaster2001:~ $ sudo apt-get install libudev1 udev systemd systemd-sysv libsystemd0 [production]
17:08 <arturo> T222148 drop libudev1 from openstack-mitaka-jessie/jessie-wikimedia (related to T216497) [production]
17:07 <arturo> T222148 drop udev from openstack-mitaka-jessie/jessie-wikimedia (related to T216497) [production]
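(The two "drop ... from openstack-mitaka-jessie/jessie-wikimedia" entries remove packages from the apt repository; a minimal sketch, assuming reprepro manages the repo, that openstack-mitaka-jessie is a component of the jessie-wikimedia distribution, and that the commands run from the repository base directory on the apt host:)
$ sudo reprepro -C openstack-mitaka-jessie remove jessie-wikimedia udev
$ sudo reprepro -C openstack-mitaka-jessie remove jessie-wikimedia libudev1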
15:02 <oblivian@puppetmaster1001> conftool action : set/pooled=yes; selector: cluster=parsoid,dc=codfw [production]
15:02 <_joe_> repooling the wtp* servers depooled in codfw for load testing [production]
14:56 <_joe_> repool mw1275 [production]
13:49 <jijiki> Restart nrpe on proton1001 [production]
12:26 <gehel> replaying 30 minutes of eqiad search traffic on codfw - T221121 [production]
12:21 <ema> cp3038: varnish-backend-restart [production]
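(varnish-backend-restart is a wrapper script on the cache hosts; a minimal sketch, assuming the wrapper is on PATH and that the backend instance is the varnish.service unit if it ever has to be restarted by hand:)
$ sudo varnish-backend-restart            # wrapper, as logged
$ sudo systemctl restart varnish.service  # assumed manual equivalent for the backend instance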
11:10 <_joe_> purging opcache on mw1275 [production]
10:47 <ema> pool cp4025 w/ ATS backend T219967 [production]
10:43 <jbond42> T220380 remove zull_2.5.0-8-gcbc7f62-wmf4jessie1 from jessie-wikimedia/thirdparty [production]
10:42 <jbond42> T220380 upload zull_2.5.1-wmf7 to jessie-wikimedia [production]
10:25 <jijiki> Depool mw1275 [production]
10:02 <lucaswerkmeister-wmde@deploy1001> Synchronized php-1.34.0-wmf.3/extensions/WikibaseLexemeCirrusSearch/: [[gerrit:507847|Fix reference to classes that moved (T222347)]] (duration: 00m 55s) [production]
09:49 <ema> depool cp4025 and reimage as upload_ats T219967 [production]
09:49 <oblivian@puppetmaster1001> conftool action : set/pooled=no; selector: cluster=parsoid,dc=codfw,name=wtp201[3-4].* [production]
09:20 <gehel> ban elastic2038 from elastic clusters pending memory issue investigation - T217398 [production]
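(Banning a node is normally done with an Elasticsearch allocation-exclusion setting; a minimal sketch against the stock cluster-settings API — the localhost:9200 endpoint and the node-name wildcard are assumptions:)
$ curl -s -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_cluster/settings' \
      -d '{"transient": {"cluster.routing.allocation.exclude._name": "elastic2038*"}}'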
08:47 <ema> pool cp4024 w/ ATS backend T219967 [production]
08:27 <jynus> starting table recompression on new backup source hosts on eqiad and codfw (stop replication) T220572 [production]
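(Recompression is a series of long ALTERs run with replication stopped, per the note above; a minimal sketch with a hypothetical table — database, table and KEY_BLOCK_SIZE are illustrative only:)
$ sudo mysql -e "STOP SLAVE"
$ sudo mysql enwiki -e "ALTER TABLE revision ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8"
$ sudo mysql -e "START SLAVE"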
07:45 <ema> depool cp4024 and reimage as upload_ats T219967 [production]
07:16 <ema> cp1089: varnish-backend-restart [production]
05:32 <_joe_> restarting varnish backend on cp1077 [production]
05:05 <oblivian@puppetmaster1001> conftool action : set/pooled=no; selector: cluster=parsoid,dc=codfw,name=wtp201[5-6].* [production]
04:57 <oblivian@puppetmaster1001> conftool action : set/pooled=no; selector: cluster=parsoid,dc=codfw,name=wtp20(1[7-9]|20).* [production]
04:55 <_joe_> progressively depooling parsoid servers in codfw to assess load tolerance [production]
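(The conftool actions above are the logged form of confctl; a minimal sketch matching the log's selectors, assuming the standard confctl select syntax:)
$ sudo confctl select 'dc=codfw,cluster=parsoid,name=wtp201[5-6].*' set/pooled=no
$ sudo confctl select 'dc=codfw,cluster=parsoid,name=wtp201[5-6].*' get   # verify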
00:32 <mutante> powercycling elastic2038 [production]
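(Powercycling is done over the host's management interface; a minimal sketch with ipmitool — the .mgmt hostname and reading the password from the IPMI_PASSWORD environment variable via -E are assumptions:)
$ ipmitool -I lanplus -H elastic2038.mgmt.codfw.wmnet -U root -E chassis power cycle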
00:10 <XioNoX> remove static route to 208.80.155.128/25 on cr1/2-eqiad - T193496 [production]
00:06 <mutante> restarting gerrit to pick up config changes for 2 mail threads and lower timeout (gerrit:507852, gerrit: 507853) [production]
2019-05-02
22:10 <jforrester@deploy1001> Synchronized php-1.34.0-wmf.3/extensions/MobileFrontend/resources/dist/mobile.editor.overlay.js: Hot-deploy T222229 to fix VE switching on MobileFrontend (duration: 00m 52s) [production]
21:21 <thcipriani> gerrit back [production]