2016-05-02
16:20 <urandom> cherry-picking latest refs/changes/78/284078/11 onto deployment-puppetmaster : T126629 [releng]
16:01 <krenair@tin> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/286286/ (duration: 00m 26s) [production]
15:53 <krenair@tin> Synchronized wikiversions-labs.json: https://gerrit.wikimedia.org/r/#/c/283689/ (duration: 00m 25s) [production]
15:53 <krenair@tin> Synchronized dblists/all-labs.dblist: https://gerrit.wikimedia.org/r/#/c/283689/ (duration: 00m 26s) [production]
15:44 <krenair@tin> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/286287/ (duration: 00m 25s) [production]
15:40 <krenair@tin> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/286285/ (duration: 00m 25s) [production]
15:32 <krenair@tin> Synchronized php-1.27.0-wmf.22/extensions/Wikidata: https://gerrit.wikimedia.org/r/#/c/286434/2 (duration: 02m 02s) [production]
15:28 <bblack> re-pooling esams [production]
15:22 <jynus> restarting db1040 for reimage [production]
15:21 <krenair@tin> Synchronized php-1.27.0-wmf.22/extensions/Math/MathRestbaseInterface.php: https://gerrit.wikimedia.org/r/#/c/286412/ (duration: 00m 26s) [production]
15:07 <krenair@tin> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/285700/ (duration: 00m 42s) [production]
14:52 <moritzm> rolling restart of zookeeper to pick up Java update [production]
14:22 <bblack> starting gdnsd on esams (esams is marked down there) [production]
14:20 <bblack> stopped gdnsd on eeden [production]
13:13 <jynus> stopping db1040 mysql for backup before cloning [production]
12:15 <elukey> deployed Varnish change to force HTTP 503 for datasets.wikimedia.org, stats.wikimedia.org, metrics.wikimedia.org as prep-step for OS reimage. [production]
12:14 <elukey> deployed Varnish change to force HTTP 503 for datasets.wikimedia.org, stats.wikimedia.org, metrics.wikimedia.org as prep-step for OS reimage. [analytics]
12:13 <elukey> deployed Varnish cache::misc change to force HTTP 503 for datasets.wikimedia.org, stats.wikimedia.org, metrics.wikimedia.org as prep-step for OS reimage. [production]
12:12 <elukey> Merged Varnish cache::misc change to force HTTP 503 for datasets.wikimedia.org, stats.wikimedia.org, metrics.wikimedia.org as prep-step for OS reimage. [production]
12:05 <elukey> enabled maintenance banner to dashiki based dashboards via https://meta.wikimedia.org/wiki/Dashiki:OutOfService [analytics]
11:21 <elukey> deployed the latest version of EventLogging from tin; service also restarted [production]
11:21 <elukey> deployed the latest version of EventLogging; service also restarted [analytics]
11:06 <moritzm> rolling restart of hhvm in eqiad for pcre security update [production]
10:42 <moritzm> rolling restart of hhvm in codfw for pcre security update [production]
09:58 <moritzm> uploaded openldap 2.4.41+wmf1 for jessie-wikimedia to carbon (T130593) [production]
09:54 <gehel> restart elasticsearch cluster to ensure multicast configuration is disabled (T110236) [deployment-prep]
09:44 <hashar> On zuul-merger instances (gallium / scandium), cleared out pywikibot/core working copy ( rm -fR /srv/ssd/zuul/git/pywikibot/core/ ) T134062 [releng]
08:14 <hashar> Restarted stuck Jenkins (due to IRC plugin) [production]
07:44 <moritzm> rebooting hasseleh/hassium for kernel upgrade to 4.4 [production]
07:10 <moritzm> installing poppler security updates [production]
06:46 <_joe_> rebooting serpens from ganeti, unreachable [production]
02:30 <l10nupdate@tin> ResourceLoader cache refresh completed at Mon May 2 02:30:33 UTC 2016 (duration 9m 18s) [production]
02:21 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.22) (duration: 09m 31s) [production]
2016-05-01
22:22 <bd808> Restarted to refresh session with phabricator; will file a bug to work on that [tools.stashbot]
20:51 <Luke081515> Updating repos & databases [rcm.cac]
19:37 <SMalyshev> enabled wdqs1002, put wdqs1001 in maintenance mode for reload [production]
16:20 <volans> changing live configuration of db1042 thread_pool_stall_limit to 10 to avoid connection timeout errors [production]
16:18 <volans> changing live configuration of db1042 thread_pool_stall_limit back to 100 to test impact on connection timeout [production]
16:08 <volans> changing live configuration of db1042 thread_pool_stall_limit to 10 to test impact on connection timeout [production]
15:24 <jynus> alter table puppet.fact_values to a bigint unsigned for m1 T107753 [production]
15:07 <volans@tin> Synchronized wmf-config/db-eqiad.php: Depool db1040 for investigation T134114 (duration: 01m 22s) [production]
14:44 <volans> truncated puppet.fact_values table to fix puppet (as documented on wikitech) [production]
10:58 <godog> reboot furud.codfw.wmnet, ganeti instance with increasing load and 100% iowait, kvm/ganeti idle instance bug likely T134098 [production]
2016-04-30
18:31 <Amir1> deploying d4f63a3 from github.com/wiki-ai/ores-wikimedia-config into targets in beta cluster via scap3 [releng]
18:05 <Amir1> deployed d4f63a3 to web and worker nodes [ores]
17:49 <Amir1> deployed d4f63a3 to staging [ores]
17:33 <Amir1> deploying 30ba552 to staging [ores]
14:54 <Amir1> running puppet agent manually in ores-web-03 [ores]
14:52 <Amir1> added precaching role to ores-web-03 [ores]
13:42 <elukey> disabled puppet on analytics1047 and scheduled downtime for the host, IO errors in the dmesg for /dev/sdd. Also stopped the Hadoop daemons to remove it from the cluster temporarily (not sure how to do it properly, will write docs). [analytics]