2019-02-02
20:42 <chaomodus> restarted pdfrender on scb1003 [production]
20:41 <chaomodus> restarted pdfrender on scb1004 [production]
20:06 <chaomodus> parsoid had failed on scandium and was alerting; the parsoid-vd service was restarted and appears to have come back [production]
05:44 <jforrester@deploy1001> Synchronized php-1.33.0-wmf.14/extensions/VisualEditor/lib/ve/src/ui/dialogs/ve.ui.FindAndReplaceDialog.js: b/src/ui/dialogs/ve.ui.FindAndReplaceDialog.js T214963 Hot-deploy VE fix to stop hitting user pref writes without debounce (duration: 01m 02s) [production]
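(The hot-deploy above addresses a pattern worth illustrating: without debouncing, each interaction in the Find and Replace dialog could trigger its own user-preference write. The TypeScript sketch below shows the general debounce technique only; the function and preference names are hypothetical and are not taken from the VisualEditor codebase.)

  // Generic debounce: collapse a burst of calls into one trailing call.
  // Names here are illustrative only, not the actual VisualEditor API.
  function debounce<T extends (...args: any[]) => void>(
    fn: T,
    waitMs: number
  ): (...args: Parameters<T>) => void {
    let timer: ReturnType<typeof setTimeout> | undefined;
    return (...args: Parameters<T>) => {
      if (timer !== undefined) {
        clearTimeout(timer);
      }
      timer = setTimeout(() => fn(...args), waitMs);
    };
  }

  // Hypothetical preference writer: only the last value within a 500 ms
  // burst is persisted, so rapid dialog input no longer floods the API.
  const savePreference = debounce((key: string, value: string) => {
    console.log(`persisting ${key}=${value}`); // stand-in for an API call
  }, 500);

  savePreference('findAndReplace.matchCase', 'true');
  savePreference('findAndReplace.matchCase', 'false'); // only this write happens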
2019-02-01
23:16 <vgutierrez> restart pdfrender on scb1004 [production]
21:57 <ejegg> updated payments-wiki-staging from 7767c7027e to 52a271e681 [production]
21:25 <ejegg> updated payments-wiki-staging to fundraising/REL1_31 branch [production]
07:13 <bawolff_> reset 2FA on wikitech for [[User:Cicalese]] [production]
2019-01-31
17:44 <jynus> running alter table on metawiki.revision_actor_temp, trying to fix horrible TokuDB bugs [production]
15:54 <jynus> stop, upgrade and restart db1117 [production]
13:34 <mvolz@deploy1001> scap-helm zotero finished [production]
13:34 <mvolz@deploy1001> scap-helm zotero cluster codfw completed [production]
13:34 <mvolz@deploy1001> scap-helm zotero upgrade production -f zotero-values-codfw.yaml stable/zotero [namespace: zotero, clusters: codfw] [production]
13:31 <mvolz@deploy1001> scap-helm zotero finished [production]
13:31 <mvolz@deploy1001> scap-helm zotero cluster eqiad completed [production]
13:31 <mvolz@deploy1001> scap-helm zotero upgrade production -f zotero-values-eqiad.yaml stable/zotero [namespace: zotero, clusters: eqiad] [production]
13:19 <mvolz@deploy1001> scap-helm zotero finished [production]
13:19 <mvolz@deploy1001> scap-helm zotero cluster staging completed [production]
13:19 <mvolz@deploy1001> scap-helm zotero upgrade staging -f zotero-values-staging.yaml --version=0.0.1 stable/zotero [namespace: zotero, clusters: staging] [production]
13:18 <mvolz@deploy1001> scap-helm zotero upgrade staging -f zotero-values-staging.yaml stable/zotero [namespace: zotero, clusters: staging] [production]
12:54 <jynus> stop, upgrade and restart db2044 [production]
12:12 <jynus> apply new grants to m5-master with replication T214740 [production]
11:30 <arturo> T215012 icinga downtime cloudvirt1015 for 4h while investigating issues [production]
11:24 <arturo> T215012 reboot cloudvirt1015 [production]
11:24 <jynus> restart eventstreams on scb1002,3,4 [production]
11:22 <jynus> restart eventstreams on scb1001 [production]
10:22 <jynus> resetting innodb consistency options to defaults for db2048 T188327 [production]
10:00 <jynus> restarting pdfrender on scb1002,3,4 [production]
09:54 <jynus> restarting pdfrender on scb1001 [production]
02:01 <gtirloni> T215004 restarted gerrit (using 1200% cpu, 71% mem) [production]
2019-01-30
20:28 <bawolff_> reset 2FA@wikitech for [[User:deigo]] [production]
18:25 <ladsgroup@deploy1001> Finished deploy [ores/deploy@ad160b0]: (no justification provided) (duration: 12m 46s) [production]
18:12 <ladsgroup@deploy1001> Started deploy [ores/deploy@ad160b0]: (no justification provided) [production]
18:03 <jynus> reducing innodb consistency options for db2048 T188327 [production]
17:36 <XioNoX> deactivate/activate cr2-esams:xe-0/1/3 [production]
17:28 <akosiaris> restart pdfrender on scb1003, scb1004 [production]
16:19 <akosiaris> restart proton on proton1002 [production]
15:52 <jynus> stop, upgrade and restart db2037 [production]
15:24 <jynus> stop, upgrade and restart db2042 [production]
14:27 <jynus> stop, upgrade and restart db2034; this will cause some lag on x1-codfw [production]
13:53 <jynus> stop, upgrade and restart db2069 [production]
11:20 <jynus> stop, upgrade and restart db2045; this will cause some lag on s8-codfw [production]
10:54 <jynus> stop, upgrade and restart db2079 [production]
10:33 <jynus> stop, upgrade and restart db2039; this will cause some lag on s6-codfw [production]
10:03 <jynus> stop, upgrade and restart db2052; this will cause some lag on s5-codfw [production]
09:31 <jynus> stop, upgrade and restart db2089 (s5/s6) [production]
08:58 <jynus> stop, upgrade and restart db2051; this will cause some lag on s4-codfw [production]
08:44 <jynus> stop, upgrade and restart db2090 [production]
2019-01-29
21:52 <jijiki> Depooling thumbor2002 due to disk failure - T214813 [production]
16:51 <arturo> T214499 update Netbox status for cloudvirt1023/1024/1025/1026/1027 from PLANNED to ACTIVE. These servers are already providing services. [production]