2018-06-04
09:39 <marostegui> Reload haproxy on dbproxy1010 to depool labsdb1010 - https://phabricator.wikimedia.org/T190704 [production]
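Note: a depool like this on the dbproxy hosts is normally a config edit plus a graceful reload. A minimal sketch, assuming the standard haproxy systemd unit (the backend/config details are not taken from this entry):
  # disable or remove the labsdb1010 server line in the haproxy backend config, then:
  sudo systemctl reload haproxy   # graceful reload: the old process drains existing connections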
09:34 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1097:3315 after alter table (duration: 00m 49s) [production]
09:04 <addshore> addshore@terbium:~$ for i in {1..2500}; do echo Lexeme:L$i; done | mwscript purgePage.php --wiki wikidatawiki [production]
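Note: the loop above emits the titles Lexeme:L1 through Lexeme:L2500, one per line, and purgePage.php purges each title it reads from stdin. An equivalent one-liner (a sketch; same script and wiki):
  seq 1 2500 | sed 's/^/Lexeme:L/' | mwscript purgePage.php --wiki wikidatawiki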
08:56 <marostegui> Deploy schema change on db1097:3315 - T191316 T192926 T89737 T195193 [production]
08:56 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1097:3315 for alter table (duration: 00m 49s) [production]
08:53 <jynus@deploy1001> Synchronized wmf-config/db-codfw.php: Depool pc2005 (duration: 00m 50s) [production]
08:10 <jynus> restarting icinga due to ongoing check/downtime issues [production]
07:57 <marostegui> Stop replication on db2094:3315 for testing [production]
07:29 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1082 after alter table (duration: 00m 51s) [production]
07:11 <gehel> starting elasticsearch cluster restart on eqiad - T193734 [production]
06:18 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2059, db2075 - T190704 (duration: 00m 49s) [production]
06:05 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1121 - T190704 (duration: 00m 49s) [production]
05:52 <marostegui> Stop replication in sync on db1121 and db2051 - T190704 [production]
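Note: stopping replication "in sync" on two replicas generally means halting both at an equivalent replication position so their contents can be compared. A minimal sketch with the mysql client; FILE/POS are illustrative placeholders, not values from this log:
  mysql -h db1121 -e "STOP SLAVE; START SLAVE UNTIL MASTER_LOG_FILE='FILE', MASTER_LOG_POS=POS;"
  mysql -h db2051 -e "STOP SLAVE; START SLAVE UNTIL MASTER_LOG_FILE='FILE', MASTER_LOG_POS=POS;"
  # each replica then stops again on its own once it reaches the given coordinates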
05:50 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1121 - T190704 (duration: 00m 49s) [production]
05:29 <marostegui> Deploy schema change on db1082 with replication (this will generate lag on labs for s5) - T191316 T192926 T89737 T195193 [production]
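Note: "with replication" means the ALTER stays in the binary log and flows to the downstream (labs) replicas, which is why lag is expected on s5. A hedged sketch of the two modes (table and DDL are placeholders, not from this log):
  # replicated schema change - downstream replicas apply it too, so they lag:
  mysql -h db1082 -e "ALTER TABLE <table> <change>;"
  # non-replicated variant, typically used on depooled hosts - keeps the DDL out of the binlog:
  mysql -h db1082 -e "SET SESSION sql_log_bin=0; ALTER TABLE <table> <change>;"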
05:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1082 for alter table (duration: 00m 53s) [production]
02:53 <l10nupdate@deploy1001> ResourceLoader cache refresh completed at Mon Jun 4 02:53:16 UTC 2018 (duration 10m 14s) [production]
02:43 <l10nupdate@deploy1001> scap sync-l10n completed (1.32.0-wmf.6) (duration: 14m 33s) [production]
2018-06-03
10:19 <zhuyifei1999_> Grid is full. qdel'ed all jobs belonging to tools.dibot except lighttpd, and all tools.mbh jobs whose names start with 'comm_delin' or 'delfilexcl' T195834 [tools]
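Note: on grid engine, a selective kill like this is typically qstat output filtered into qdel. A sketch under those assumptions (the job name is the third column of default qstat output):
  # kill tools.mbh jobs whose names start with comm_delin or delfilexcl:
  qstat -u tools.mbh | awk '$3 ~ /^(comm_delin|delfilexcl)/ {print $1}' | xargs -r qdel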
02:18 <andrewbogott> rebooting labservices1001; it seems to have crashed [production]
2018-06-02
21:20 <legoktm> legoktm@integration-slave-docker-1003:~$ sudo docker rmi $(sudo docker images -q) [releng]
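Note: docker rmi $(docker images -q) removes every image on the host. A gentler variant that only removes dangling (untagged) layers, when a full wipe is not needed (sketch):
  sudo docker image prune -f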
19:57 <greg-g> gjg@integration-slave-docker-1003:/srv/jenkins-workspace/workspace$ sudo rm -rf * [releng]
19:31 <Krenair> restarted parsoid on deployment-parsoid09 to try to fix stuff [releng]
18:07 <Krinkle> Beta Cluster's RESTBase or Parsoid is broken. Saving VE times out; logstash-beta contains restbase: "internal_http_error" / "Error: ESOCKETTIMEDOUT" [releng]
07:36 <legoktm@deploy1001> Synchronized php-1.32.0-wmf.6/skins/MonoBook/: Temporarily revert responsive MonoBook (T195625) (duration: 00m 58s) [production]
01:16 <legoktm> running docker-pkg in a screen because my connection is super flaky [releng]
2018-06-01
23:02 <bd808> Usurped empty project for T193964. Added Chico Venancio, Cicalese, Gergő Tisza, and MarkAHershberger as admins [matrix]
22:55 <bd808> Added BryanDavis (self) as projectadmin [matrix]
22:17 <Reedy> https://gerrit.wikimedia.org/r/#/c/436902/ finished deploying [releng]
21:37 <Krinkle> Re-create performance-beta.wmflabs.org webproxy (wired to webperf01) - T195314 [releng]
21:29 <Krinkle> Re-creating webperf01 in deployment-prep, T195314 [releng]
20:57 <legoktm> deploying docker-pkg with https://gerrit.wikimedia.org/r/436859 for reals this time (again) [releng]
20:51 <legoktm> deleting old versions of docker images [releng]
20:48 <hashar> contint1001: deleting some old wikimedia/mediawiki-services-mathoid docker images [releng]
20:40 <mutante> contint1001 - mkdir /srv/zuul-debug-logs ; mv debug.log.2018-05-* from /var/log/zuul/ over there to free up disk space on the / volume group [releng]
20:26 <mutante> contint1001 - apt-get clean freed up a little more disk space [releng]
20:07 <legoktm> really Updating docker-pkg files on contint1001 for https://gerrit.wikimedia.org/r/436852 [releng]
20:02 <Reedy> Updating docker-pkg files on contint1001 for https://gerrit.wikimedia.org/r/436852 [releng]
19:21 <ebernhar1son> enable query phase slow logging and increase thresholds for fetch phase slow logging for content/general indices on eqiad and codfw elasticsearch clusters [production]
19:14 <mutante> zh.planet - fixed an issue with a corrupt state file and permissions; also updated it and switched to the new design [production]
17:34 <mutante> deployment.eqiad/codfw DNS names switched from tin to deploy1001 [production]
17:06 <thcipriani@deploy1001> Synchronized README: noop test of new deployment server (duration: 00m 53s) [production]
16:39 <mutante> deploy2001 - also fixing file permissions: files owned by UID 996 -> mwdeploy, files owned by UID 997 -> trebuchet [production]
16:21 <mutante> deployment server has switched away from tin to deploy1001. set global scap lock on deploy1001, re-enabled puppet and ran puppet, disabled tin as deployment server (T175288) [production]
16:13 <herron> enabled new logstash tcp input with TLS enabled for syslogs on port 16514 T193766 [production]
15:51 <gehel> elasticsearch cluster restart on codfw completed - T193734 [production]
15:47 <mutante> @deploy1001:/srv/deployment# find . -uid 997 -exec chown trebuchet {} \; [production]
15:41 <mutante> root@deploy1001:/srv/mediawiki-staging# find . -uid 996 -exec chown mwdeploy {} \; [production]
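Note: these chown runs remap files that arrived from the old server still owned by bare numeric UIDs (996/997) to the matching named users on deploy1001 (the same mapping as the 16:39 deploy2001 entry above). A quick check that nothing was missed (sketch):
  find /srv/mediawiki-staging /srv/deployment \( -uid 996 -o -uid 997 \) -print | head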
15:17 <mutante> [deploy1001:~] $ scap pull-master tin.eqiad.wmnet [production]
15:12 <mutante> tin umask 022 && echo 'switching deploy servers' > /var/lock/scap-global-lock [production]
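Note: scap aborts any deployment while /var/lock/scap-global-lock exists; its contents are typically surfaced to deployers as the reason, hence the echoed message. Releasing the lock later is just removing the file (sketch):
  sudo rm /var/lock/scap-global-lock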