2017-06-21 §
09:38 <hashar> upgrading/rebooting all instances in the integration project to catch up with Linux kernel upgrades [releng]
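For context, the per-instance cycle behind an entry like this is roughly the following; the host names and the plain ssh loop are illustrative assumptions, not the exact procedure used here:

```
# Rough sketch: pull in the pending kernel packages, then reboot each instance.
for host in integration-slave-jessie-1001 integration-slave-jessie-1002; do
  ssh "$host" 'sudo apt-get update && sudo DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade'
  ssh "$host" 'sudo reboot'
done
```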
2017-06-20 §
19:25 <hashar> Nodepool rate being bumped from 1 query per 6 seconds to 1 query per 5 seconds ( https://gerrit.wikimedia.org/r/#/c/358601/ ) [releng]
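In Nodepool's provider configuration, `rate` is the delay in seconds between OpenStack API operations, so the bump above amounts to a one-line change along these lines (the config path and provider name are assumptions for illustration):

```
# Inspect the provider block after puppet has applied the change.
grep -B3 'rate:' /etc/nodepool/nodepool.yaml
# providers:
#   - name: wmflabs-eqiad
#     rate: 5.0    # was 6.0: one API query every 5 seconds instead of every 6
```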
01:25 <thcipriani|afk> deployment-tin had been stuck in the post-merge queue for the past 13 hours, unstuck now [releng]
2017-06-19 §
22:08 <thcipriani|afk> reloading zuul to deploy https://gerrit.wikimedia.org/r/#/c/360091/ [releng]
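A typical Zuul layout deploy on the CI server looks roughly like this; the checkout path and service name are assumptions based on the usual setup rather than taken from this log:

```
# Update the deployed copy of integration/config, then ask Zuul to re-read it.
cd /etc/zuul/wikimedia
git pull --ff-only origin master
sudo service zuul reload   # re-reads the layout without dropping queued changes
```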
08:29 <hashar> Gerrit: added Ladsgroup to 'mediawiki' group - T165860 [releng]
2017-06-18 §
19:26 <Reedy> Re-enabled beta-update-databases-eqiad as wikidatawiki takes < 10 minutes T168036 T167981 [releng]
19:25 <Reedy> A lot of items on beta wikidatawiki deleted T168036 T167981 [releng]
2017-06-16 §
23:41 <Reedy_> also deleting a lot of Property:P* pages on beta wikidatawiki T168106 [releng]
22:55 <Reedy> deleting Q100000-Q200000 on beta wikidatawiki T168106 [releng]
19:04 <Reedy> disabled beta-update-databases-eqiad because it's not doing anything useful at the moment [releng]
14:56 <zeljkof> Reloading Zuul to deploy 18a50a707eac0bcdd88f48f2321af78ee399a4eb [releng]
14:40 <hashar> integration-slave-jessie-1001 apt-get upgrade to downgrade python-pbr to 0.8.2 as pinned since T153877. /usr/bin/unattended-upgrade magically upgraded it for some reason [releng]
06:49 <Reedy> script is up to `Processed up to page 336425 (Q235372)`... hopefully it will have finished by morning [releng]
03:13 <Reedy> running `mwscript extensions/Wikibase/repo/maintenance/rebuildTermSqlIndex.php --wiki=wikidatawiki` in screen as root on deployment-tin for T168036 [releng]
03:10 <Reedy> running `mwscript extensions/Wikibase/repo/maintenance/rebuildEntityPerPage.php --wiki=wikidatawiki` in screen as root on deployment-tin for T168036 [releng]
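A long-running maintenance script like these is typically kept alive in a detached screen session, roughly as follows (the session name is illustrative):

```
# Start the rebuild in a detached screen session as root, then check on it later.
sudo -i
screen -dmS rebuildTermSqlIndex \
  mwscript extensions/Wikibase/repo/maintenance/rebuildTermSqlIndex.php --wiki=wikidatawiki
screen -ls                      # list running sessions
screen -r rebuildTermSqlIndex   # reattach to watch progress
```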
02:23 <Reedy> cherry-picked https://gerrit.wikimedia.org/r/#/c/354932/ onto beta puppetmaster [releng]
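Cherry-picking a Gerrit change onto the beta puppetmaster's local operations/puppet checkout is usually done along these lines; the repository path and the patchset number in the ref are assumptions:

```
# Fetch the change from Gerrit and apply it on top of the local branch.
cd /var/lib/git/operations/puppet
git fetch https://gerrit.wikimedia.org/r/operations/puppet refs/changes/32/354932/1
git cherry-pick FETCH_HEAD
```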
2017-06-15 §
16:34 <RainbowSprinkles> deployment-prep: Disabled database updates for a while, running them by hand [releng]
10:39 <hashar> apt-get upgrade on deployment-tin [releng]
00:52 <thcipriani> deployment-tin jenkins agent borked for 4 hours, should be fixed now [releng]
2017-06-14 §
12:24 <hashar> gerrit: marked mediawiki/skins/Donate as read-only ( https://gerrit.wikimedia.org/r/#/admin/projects/mediawiki/skins/Donate ) - T124519 [releng]
2017-06-13 §
22:05 <hashar> Zuul restarted manually from a terminal on contint1001. It does not have any statsd configuration, so we will miss metrics for a bit until it is restarted properly. [releng]
21:13 <hashar> Gracefully restarting Zuul [releng]
20:37 <hashar> Restarting Nodepool. Its pool tracking apparently got confused and it was spawning too many Trusty nodes (7 instead of 4) [releng]
20:31 <hashar> Nodepool: deleted a bunch of Trusty instances. It had scheduled a lot of them, and they were taking up slots in the pool. Better to have jessie nodes spawned instead, since demand for them is high [releng]
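Pruning the surplus nodes is done with the (pre-v3) nodepool CLI, roughly as below; running it via sudo as the nodepool user and the node ID shown are assumptions:

```
# List current nodes, note the IDs of the surplus Trusty ones, then delete them.
sudo -u nodepool nodepool list
sudo -u nodepool nodepool delete 12345   # repeat per surplus node; the ID is illustrative
```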
20:19 <hashar> deployment-prep: added Polishdeveloper to the "importer" global group. https://deployment.wikimedia.beta.wmflabs.org/wiki/Special:GlobalUserRights/Polishdeveloper - T167823 [releng]
18:47 <andrewbogott> root@deployment-salt02:~# salt "*" cmd.run "apt-get -y install facter" [releng]
18:46 <andrewbogott> using salt to "apt-get -y install facter" on all deployment-prep instances [releng]
18:38 <andrewbogott> restarting apache2 on deployment-puppetmaster02 [releng]
18:37 <andrewbogott> doing a git fetch and rebase for deployment-puppetmaster02 [releng]
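The manual fetch-and-rebase on a self-hosted puppetmaster is roughly the following; the repository path and branch name are assumptions matching the usual layout:

```
# Bring in upstream operations/puppet and replay the local cherry-picks on top.
cd /var/lib/git/operations/puppet
git fetch origin
git rebase origin/production
```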
17:00 <elukey> hacking apache on mediawiki05 to test rewrite rules [releng]
16:04 <Amir1> cherry-picked 357985/4 on puppetmaster [releng]
15:59 <halfak> deployed ores-prod-deploy:862aea9 [releng]
13:47 <hashar> nodepool: forcing a puppet run to lower min-ready for trusty [puppet] - https://gerrit.wikimedia.org/r/356466 [releng]
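The change lowers the per-label `min-ready` count so fewer Trusty nodes are pre-booted; forcing the puppet run and checking the result looks roughly like this (the config path, label name, and value are assumptions):

```
# Apply the merged puppet change immediately, then confirm the new label setting.
sudo puppet agent --test
grep -A2 'ci-trusty' /etc/nodepool/nodepool.yaml
# labels:
#   - name: ci-trusty-wikimedia
#     min-ready: 1    # lowered to leave more pool slots for jessie nodes
```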
10:53 <elukey> rolling restart of all kafka brokers to pick up the new zookeeper change (only deployment-zookeeper02 available) [releng]
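A rolling broker restart amounts to restarting one broker at a time and letting it rejoin before moving on; the broker host names, service name, and fixed sleep are assumptions for illustration:

```
# Restart each broker in turn so the cluster stays available throughout.
for broker in deployment-kafka01 deployment-kafka02; do
  ssh "$broker" 'sudo service kafka restart'
  sleep 60   # give the broker time to rejoin and catch up before the next restart
done
```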
10:36 <elukey> delete deployment-zookeeper01 (old trusty instance, replaced with a jessie one) [releng]
09:50 <elukey> big refactoring for zookeeper merged in operations/puppet - https://gerrit.wikimedia.org/r/#/c/354449 - ping the Analytics team about any issues [releng]
2017-06-12 §
14:22 <hashar> Image snapshot-ci-trusty-1497276913 in wmflabs-eqiad is ready [releng]
14:15 <hashar> Nodepool: regenerating Trusty images to confirm that removal of keystone admin_token is a noop for nodepool - T165211 [releng]
12:44 <hashar> Image snapshot-ci-jessie-1497270581 in wmflabs-eqiad is ready [releng]
12:30 <hashar> nodepool: refreshing Jessie snapshot to upgrade HHVM from 3.12 to 3.18 - T167493 T165074 [releng]
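Refreshing a snapshot with the (pre-v3) nodepool CLI looks roughly like this; the provider name comes from the entries above, while the image label and exact invocation are assumptions:

```
# Rebuild the Jessie snapshot, then wait for the new image to show up as ready.
sudo -u nodepool nodepool image-update wmflabs-eqiad ci-jessie-wikimedia
sudo -u nodepool nodepool image-list
```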
08:47 <hashar> deployment-prep : salt -v '*' cmd.run 'apt-get clean' [releng]
2017-06-09 §
20:30 <thcipriani> reloading zuul to deploy https://gerrit.wikimedia.org/r/#/c/358092/1 [releng]
18:50 <thcipriani> reloading zuul to deploy https://gerrit.wikimedia.org/r/#/c/358067/3 [releng]
2017-06-07 §
17:49 <elukey> forced /usr/local/bin/git-sync-upstream manually on puppetmaster02 [releng]
17:30 <elukey> manually fixed rebase issue for operations/puppet on puppetmaster02 (empty commit due to the change for scap3 and jobrunners) [releng]
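When an upstream merge makes one of the local cherry-picks empty, the rebase halts on it; a common way to clear that, and an assumption about how it was handled here, is:

```
# With the rebase halted on the commit that is now empty upstream, drop it and continue.
cd /var/lib/git/operations/puppet
git status            # shows the interrupted rebase and the offending commit
git rebase --skip
```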
09:33 <elukey> restart kafka brokers to pick up the new zookeeper settings [releng]
09:00 <elukey> adding deployment-zookeeper02.eqiad.wmflabs to Hiera:deployment-prep [releng]
08:43 <gehel> upgrading kibana to v5.3.3 on deployment-logstash2 [releng]
08:35 <gehel> rolling back to kibana 5.3.2; 5.4.1 is incompatible with the installed elasticsearch version [releng]
08:28 <gehel> upgrading kibana to v5.4.1 on deployment-logstash2 [releng]
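The rollback above is essentially a version-pinned reinstall; the package name and the availability of the 5.3.2 package in the configured apt repository are assumptions:

```
# Check which kibana versions apt can see, pin the install to 5.3.2, then restart.
apt-cache policy kibana
sudo apt-get install kibana=5.3.2
sudo service kibana restart
```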