2019-03-02
20:44 <hauskatze> Renamed https://github.com/wikimedia/wikimedia-github-community-health-defaults to https://github.com/wikimedia/.github [releng]
20:42 <hauskatze> ssh -p 29418 gerrit.wikimedia.org replication start wikimedia/github-community-health-defaults --wait [releng]
20:40 <hauskatze> github created https://github.com/wikimedia/wikimedia-github-community-health-defaults [releng]
20:31 <Reedy> reloading zuul to deploy https://gerrit.wikimedia.org/r/493881 [releng]
20:30 <Krinkle> Failure on integration-slave-docker-1021 (ENOMEM) https://integration.wikimedia.org/ci/job/fresnel-node10-browser-docker/61/console [releng]
19:51 <legoktm> deploying https://gerrit.wikimedia.org/r/493872 [releng]
19:37 <legoktm> deploying https://gerrit.wikimedia.org/r/493862 [releng]
19:31 <valhallasw`cloud> Cleaning up old (2017-2018) log files [tools.nlwikibots]
18:55 <framawiki> block spammer https://quarry.wmflabs.org/Twc93521 `INSERT INTO user_group (user_id, group_name) VALUES (3734, "blocked");` [quarry]
18:26 <Reedy> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/493808 [releng]
18:21 <Reedy> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/493837 [releng]
12:12 <gtirloni> labstore1006 started nfsd T217473 [production]
2019-03-01
20:45 <ejegg> turned off fundraising omnimail process unsubscribes job [production]
19:40 <XioNoX> pre-configure asw-a8 ports on asw2-a8-eqiad - T187960 [production]
19:32 <XioNoX> pre-configure asw-a7 ports on asw2-a7-eqiad - T187960 [production]
19:29 <XioNoX> pre-configure asw-a6 ports on asw2-a6-eqiad - T187960 [production]
19:17 <thcipriani> integration-slave-docker-1021:/# docker rmi $(docker images | grep " months " |grep -v " [1-2] months " | awk '{print $3}') [releng]
19:17 <XioNoX> pre-configure asw-a5 ports on asw2-a5-eqiad - T187960 [production]
18:53 <robh> notebook1003 has unusually high load recently (23) and seemed to lag in reporting to icinga. no hardware failures, pinged about it in #wikimedia-analytics [production]
17:02 <thcipriani> integration-slave-jessie-1004 back online [releng]
16:58 <thcipriani> integration-slave-jessie-1002 back online (disk space looked fine); rebooting integration-slave-jessie-1004 -- can't ssh to machine [releng]
16:33 <jbond42> rolling security update of bind9 packages on jessie and trusty [production]
16:11 <Lucas_WMDE> delete refs/master and refs/gerrit/master on WikibaseQualityConstraints repository T217408 [releng]
15:49 <hashar> wikidata/query/blazegraph change Gerrit config to require a change-id # T216855 [releng]
15:44 <AmandaNP> installed missing requests_oauthlib via pip [utrs]
15:43 <AmandaNP> installed missing "pip" [utrs]
15:38 <AmandaNP> wget tested on utrs-production2 to verify errors in apache log are clear. Everything looks good [utrs]
15:38 <ema> trafficserver_8.0.2-1wm1 uploaded to stretch-wikimedia [production]
15:24 <AmandaNP> reset db password for deltaquad due to inability to log in with the right password [utrs]
15:02 <akosiaris> restore proton config values [production]
14:57 <AmandaNP> rebooting utrs-database2 [utrs]
14:55 <AmandaNP> reinstalled python-mysqldb on utrs-database2 because it was missing [utrs]
14:50 <andrewbogott> rebooting utrs-production2 to resolve nfs-mounting issues [utrs]
14:33 <hashar> Updating all debian-glue Jenkins jobs to properly take into account the BUILD_TIMEOUT parameter # T217403 [production]
14:28 <hashar> Upgrading integration/jenkins-job-builder to version 2.0.2 + one custom hack 11aa5de4...a06d173e # T143731 [releng]
14:18 <hashar> integration/jenkins-job-builder : importing upstream code to new branch "upstream". Push all upstream tags to our repository [releng]
13:24 <moritzm> removed sca* hosts from debmonitor database [production]
12:49 <akosiaris> lower max_render_queue_size to 20 for proton on proton100{1,2} [production]
12:32 <akosiaris> restart proton on proton1002, OOM showed up [production]
12:31 <akosiaris> restart proton on proton1001, counted 99 chromium processes left running since at least Jan 30 [production]
11:47 <jbond42> rebooting labsdb1005.codfw.wmnet [production]
11:17 <jbond42> rebooting labstore2004.codfw.wmnet [production]
11:05 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1094 (duration: 00m 50s) [production]
10:43 <arturo> T204502 shut down lsg-01 instance (was active by mistake due to workload reallocation); will be deleted soon anyway [design]
10:39 <arturo> shut down drmft instance again (was active by mistake due to workload reallocation); will be deleted soon anyway [math]
08:52 <godog> temporarily stop prometheus instances on prometheus2004 to take a snapshot [production]
07:44 <oblivian@deploy1001> Synchronized README: Test deploy for new scap configuration (duration: 00m 48s) [production]
07:39 <oblivian@deploy1001> Synchronized README: noop sync to test opcache-manager (duration: 00m 47s) [production]
07:31 <oblivian@deploy1001> Synchronized README: Test deploy for new scap configuration (duration: 00m 46s) [production]
07:29 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1094 after mysql upgrade (duration: 00m 47s) [production]