2014-03-21
14:08 <hashar> Jenkins: label labs slaves with hasJenkinsDebianGlue to build debian packages on them [production]
13:43 <hashar> Jenkins: installing jenkins-debian-glue and misc::package-builder on labs slaves. [production]
12:02 <akosiaris> upgrade jenkins-debian-glue to 0.8.1 on apt.wikimedia.org [production]
11:39 <akosiaris> upgraded libmemcached packages on apt.wikimedia.org to libmemcached_1.0.17-1~wmf+precise2 [production]
10:01 <hashar> Jenkins: deleting pmtpa labs slaves integration-slave02 and integration-slave03. Replaced by eqiad instances integration-slave1001 and integration-slave1002. [production]
09:29 <hashar> l10ncache is now rebuilt properly: https://integration.wikimedia.org/ci/job/beta-code-update/53508/console [releng]
09:23 <hashar> fixing l10ncache on deployment-bastion: chown -R l10nupdate:l10nupdate /data/project/apache/common-local/php-master/cache/l10n. The l10nupdate UID/GID have been changed and are now in LDAP [releng]
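Roughly, that fix amounts to the following sketch (the path and ownership are taken from the entry above; checking the LDAP-provided UID/GID with id is an assumption, not part of the recorded procedure):

    # confirm the new LDAP-backed l10nupdate UID/GID resolve on the host
    id l10nupdate
    # re-own the l10n cache so LocalisationUpdate can write to it again
    chown -R l10nupdate:l10nupdate /data/project/apache/common-local/php-master/cache/l10n
    # spot-check the result
    stat -c '%U:%G %n' /data/project/apache/common-local/php-master/cache/l10n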
03:05 <LocalisationUpdate> ResourceLoader cache refresh completed at Fri Mar 21 03:05:27 UTC 2014 (duration 5m 26s) [production]
02:34 <LocalisationUpdate> completed (1.23wmf19) at 2014-03-21 02:34:39+00:00 [production]
02:12 <LocalisationUpdate> completed (1.23wmf18) at 2014-03-21 02:12:39+00:00 [production]
00:49 <ori> synchronized wmf-config 'Ib539f96eb7: Increase the network performance sampling rate for MediaViewer' [production]
00:48 <ori> updated /a/common to {{Gerrit|Ib0eb802c4}}: Fix typo in I86f5493d0 [production]
00:38 <hoo> synchronized wmf-config/ 'Fix typo <> udp, also Icdb5425 and I04e5f7f which weren't synced but look harmless' [production]
2014-03-20
23:46 <bd808> Mounted secondary disk as /var/lib/elasticsearch on deployment-logstash1 [releng]
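A minimal sketch of such a mount, assuming the secondary disk appears as /dev/vdb, an ext4 filesystem, and the stock elasticsearch user (all assumptions; the entry does not record the exact steps):

    # format the secondary disk (device name is an assumption)
    mkfs.ext4 /dev/vdb
    # mount it where Elasticsearch keeps its data
    mkdir -p /var/lib/elasticsearch
    mount /dev/vdb /var/lib/elasticsearch
    # persist the mount across reboots and hand the tree to the service user
    echo '/dev/vdb /var/lib/elasticsearch ext4 defaults 0 2' >> /etc/fstab
    chown -R elasticsearch:elasticsearch /var/lib/elasticsearch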
23:46 <bd808> Converted deployment-tin to use local puppet & salt masters [releng]
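Such a conversion boils down to pointing the instance's puppet agent and salt minion at the project-local masters; the sketch below assumes the master is deployment-salt.eqiad.wmflabs (the host named later in this log) and that a simple in-place edit of the configs is enough, which may not match the actual labs procedure:

    # sketch only: point the puppet agent at the project-local puppetmaster
    sed -i 's/^server.*/server = deployment-salt.eqiad.wmflabs/' /etc/puppet/puppet.conf
    # point the salt minion at the same box and drop the stale cached master key
    sed -i 's/^#*master:.*/master: deployment-salt.eqiad.wmflabs/' /etc/salt/minion
    rm -f /etc/salt/pki/minion/minion_master.pub
    service salt-minion restart
    # confirm the agent talks to the new master
    puppet agent --test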
23:38 <catrope> synchronized php-1.23wmf18/resources/mediawiki/mediawiki.inspect.js [production]
23:22 <catrope> synchronized wmf-config/CommonSettings.php [production]
23:22 <catrope> synchronized wmf-config/InitialiseSettings.php [production]
23:22 <catrope> synchronized docroot/noc/createTxtFileSymlinks.sh [production]
23:22 <catrope> updated /a/common to {{Gerrit|Ia08c65d40}}: Enable Flow on Hovercards Beta Features [production]
22:09 <hashar> Migrated videoscaler01 to use self salt/puppet masters. [releng]
21:43 <ori> synchronized wmf-config/InitialiseSettings.php 'If51eda243: Follow up Id6222f4db to amend sort order in feed URL' [production]
21:43 <ori> updated /a/common to {{Gerrit|If51eda243}}: Follow up Id6222f4db to amend sort order in feed URL [production]
21:30 <hashar> manually installing timidity-daemon on jobrunner01.eqiad so puppet can stop it and stop whining [releng]
21:20 <ori> synchronized wmf-config/InitialiseSettings.php 'Id6222f4d: Add RSS of Bugzilla query of open HHVM bugs to mediawikiwiki's whitelist' [production]
21:20 <ori> updated /a/common to {{Gerrit|Id6222f4db}}: Add RSS of Bugzilla query of open HHVM bugs to mediawikiwiki's whitelist [production]
21:06 <reedy> synchronized docroot and w [production]
21:03 <reedy> updated /a/common to {{Gerrit|Iaa99d2162}}: Add Wikibase repoSiteName setting for client [production]
21:00 <hashar> migrate jobrunner01.eqiad.wmflabs to self puppet/salt masters [releng]
20:56 <bd808> Updated scholarships.wikimedia.org to cb2ef4c (fix for bug 62464) [production]
20:55 <hashar> deleting deployment-jobrunner02; let's start with a single instance for now [releng]
20:51 <hashar> Creating deployment-jobrunner01 and 02 in eqiad. [releng]
20:14 <mutante> DNS update - remove ssl1-4 [production]
20:08 <mutante> DNS update - remove sq67-70, former varnish testing [production]
19:35 <akosiaris> created a 50G LV for /var/log on zirconium, stopped all services, moved data to it, mounted it and restarted all services [production]
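The LVM steps described in that entry look roughly like the following; the volume group name and filesystem type are placeholders, and the stop/restart of services is left as comments since the entry does not name them:

    # stop services that write to /var/log first (as the entry notes)
    # create and format a 50G logical volume (vg0 is a placeholder name)
    lvcreate -L 50G -n log vg0
    mkfs.ext4 /dev/vg0/log
    # copy the existing logs onto the new volume, then swap the mount in
    mount /dev/vg0/log /mnt
    rsync -a /var/log/ /mnt/
    umount /mnt
    mount /dev/vg0/log /var/log
    echo '/dev/vg0/log /var/log ext4 defaults 0 2' >> /etc/fstab
    # restart the services afterwards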
19:29 <reedy> synchronized wmf-config/ 'Wikibase config updates' [production]
19:23 <reedy> Finished scap: Rebuild 1.23wmf19 l10n cache for wikibase (duration: 12m 01s) [production]
19:10 <reedy> Started scap: Rebuild 1.23wmf19 l10n cache for wikibase [production]
19:08 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: group0 wikis to 1.23wmf19 [production]
19:02 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: Wikipedias to 1.23wmf18 [production]
19:00 <mutante> disk full on zirconium - gzipping an etherpadlite.sql dump I found [production]
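The usual triage for a full root partition like this is to find the biggest files and compress or remove them; a quick sketch (the dump's path is a guess, not recorded in the log):

    # find what is eating the space on this filesystem
    du -xh / 2>/dev/null | sort -h | tail -20
    # compress the oversized dump in place and recheck free space
    gzip /var/backups/etherpadlite.sql   # path is illustrative
    df -h /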
17:41 <reedy> synchronized php-1.23wmf19 'Update Wikidata and WikimediaMessages' [production]
17:06 <Krinkle> Reloading Zuul to deploy Ie800ed90b51c47d5a1 [production]
16:58 <mutante> repooling mw1163 (it's back in dsh as well) [production]
16:38 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: testwiki back to 1.23wmf18 till window [production]
16:05 <bd808> Ran /usr/local/bin/sync-common && /usr/local/bin/scap-rebuild-cdbs on mw1163. Should not be repooled until it's back in the dsh group, and should be manually synced just before repooling. [production]
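Spelled out, the sequence from that entry is just the two quoted commands plus the repooling caveat; the final manual sync before repooling is the part flagged as still to do:

    # refresh the MediaWiki tree and rebuild the local l10n CDB files on mw1163
    /usr/local/bin/sync-common && /usr/local/bin/scap-rebuild-cdbs
    # do not repool until the host is back in the dsh group;
    # run one more manual sync immediately before repooling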
15:47 <hashar> fixed salt-minion service on deployment-cache-upload01 and deployment-cache-mobile03 by deleting /etc/salt/pki/minion/minion_master.pub [releng]
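The fix described above, as a short sketch (the key path is the one from the entry; the stop/start around the deletion is an assumption):

    # the minion cached a stale master public key; remove it and restart
    service salt-minion stop
    rm /etc/salt/pki/minion/minion_master.pub
    service salt-minion start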
15:36 <reedy> Finished scap: testwiki to 1.23wmf19 and build l10n cache (duration: 15m 32s) [production]
15:30 <hashar> migrated deployment-cache-upload01.eqiad.wmflabs and deployment-cache-mobile03.eqiad.wmflabs to use the salt/puppetmaster deployment-salt.eqiad.wmflabs. [releng]
15:30 <hashar> deployment-cache-upload01.eqiad.wmflabs and deployment-cache-mobile03.eqiad.wmflabs recovered! /dev/vdb does not exist in eqiad, which caused the instances to stall. [releng]