2020-10-01
17:32 <elukey> remove + re-create /srv/deployment/analytics/refinery on stat1007 (perm issues after reimage) [analytics]
17:18 <fdans> deploying refinery [analytics]
14:51 <elukey> bootstrap an-worker109[8-9] as hadoop workers (with GPU) [analytics]
13:35 <elukey> bootstrap an-worker1097 (GPU node) as hadoop worker [analytics]
13:15 <elukey> restart performance-asoranking on stat1007 [analytics]
13:15 <elukey> execute "sudo chown analytics-privatedata:analytics-privatedata-users /srv/published-datasets/performance/autonomoussystems/*" on stat1007 to fix a perm issue after reimage [analytics]
10:30 <elukey> add an-worker1103 to the hadoop cluster [analytics]
07:15 <elukey> restart hdfs namenodes on an-master100[1,2] to pick up new hadoop workers settings [analytics]
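Restarts of an active/standby namenode pair are normally done one node at a time, standby first. A minimal sketch of that sequence, assuming the Bigtop-style systemd unit name and hypothetical HA service IDs (take the real ones from hdfs-site.xml):

    # Check which namenode is currently active (service ID is hypothetical)
    sudo -u hdfs kerberos-run-command hdfs hdfs haadmin -getServiceState an-master1001-eqiad-wmnet
    # Restart the standby first, verify it comes back out of safemode...
    sudo systemctl restart hadoop-hdfs-namenode
    sudo -u hdfs kerberos-run-command hdfs hdfs dfsadmin -safemode get
    # ...then repeat on the other master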
06:04 <elukey> execute "sudo chown -R analytics-privatedata:analytics-privatedata-users /srv/geoip/archive" on stat1007 - T264152 [analytics]
05:58 <elukey> execute "sudo -u hdfs kerberos-run-command hdfs hdfs dfs -chown -R analytics-privatedata /wmf/data/archive/geoip" - T264152 [analytics]
2020-09-30
07:29 <elukey> execute "alter table superset_production.alerts drop key ix_alerts_active;" on db1108's analytics-meta instance to fix replication after Superset upgrade - T262162 [analytics]
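To confirm a fix like this took effect, replication status can be checked on the replica itself; a minimal sketch using plain MariaDB commands:

    # On db1108: verify the replica is applying events again after the ALTER
    sudo mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running|Last_SQL_Error'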
07:04 <elukey> superset upgraded to 0.37.2 on analytics-tool1004 - T262162 [analytics]
05:47 <elukey> "PURGE BINARY LOGS BEFORE '2020-09-22 00:00:00';" on an-coord1001's mariadb - T264081 [analytics]
2020-09-28
18:37 <elukey> execute "PURGE BINARY LOGS BEFORE '2020-09-20 00:00:00';" on an-coord1001's mariadb in an attempt to recover space [analytics]
18:37 <elukey> execute "PURGE BINARY LOGS BEFORE '2020-09-15 00:00:00';" on an-coord1001's mariadb in an attempt to recover space [analytics]
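Before purging it helps to see what the binlogs hold; a sketch of the check-then-purge sequence (dates taken from the entries above):

    # List binlogs and their sizes, then drop everything older than the cutoff
    sudo mysql -e "SHOW BINARY LOGS;"
    sudo mysql -e "PURGE BINARY LOGS BEFORE '2020-09-20 00:00:00';"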
15:09 <elukey> execute set global max_connections=200 on an-coord1001's mariadb (hue reporting too many conns, but in reality the fault is from superset) [analytics]
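SET GLOBAL only changes the running instance; the value reverts on restart unless it is also written to the server config. A minimal sketch:

    # Raise the limit at runtime and confirm it took effect
    sudo mysql -e "SET GLOBAL max_connections = 200; SHOW VARIABLES LIKE 'max_connections';"
    # To persist across restarts, also set max_connections = 200 under
    # [mysqld] in the server config (path varies by install)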
10:02 <elukey> force /srv/jupyterhub/deploy/create_virtual_env.sh on stat1007 after the reimage [analytics]
07:58 <elukey> starting the process to decom the old hadoop test cluster [analytics]
2020-09-27
06:53 <elukey> manually ran /usr/bin/find /srv/backup/hadoop/namenode -mtime +14 -delete on an-master1002 to free space on the /srv partition [analytics]
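Since find's -delete is irreversible, the usual pattern is a dry run with -print first; a sketch built on the same expression:

    # Dry run: list what would be removed
    /usr/bin/find /srv/backup/hadoop/namenode -mtime +14 -print
    # Then the destructive pass
    /usr/bin/find /srv/backup/hadoop/namenode -mtime +14 -delete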
2020-09-25
16:25 <elukey> systemctl reset-failed monitor_refine_eventlogging_legacy_failure_flags.service on an-launcher1002 to clear alerts [analytics]
15:52 <elukey> restart hdfs namenodes to correct rack settings of the new host [analytics]
15:42 <elukey> add an-worker1096 (GPU worker) to the hadoop cluster [analytics]
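The rack mapping the namenodes picked up can be verified with a standard HDFS admin command; a sketch, reusing the kerberos-run-command pattern from the entries above:

    # Print the datanode-to-rack topology as the namenodes currently see it
    sudo -u hdfs kerberos-run-command hdfs hdfs dfsadmin -printTopology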
08:57 <elukey> restart daemons on analytics1052 (journalnode) to verify new TLS setting simplification (no truststore config in ssl-server.xml, not needed) [analytics]
07:18 <elukey> restart datanode on analytics1044 after new datanode partition settings (one partition was missing, caught by https://gerrit.wikimedia.org/r/c/operations/puppet/+/629647) [analytics]
2020-09-24
13:24 <elukey> moved the hadoop cluster to puppet TLS certificates [analytics]
13:20 <elukey> re-enable timers on an-launcher1002 after maintenance [analytics]
09:51 <elukey> stop all timers on an-launcher1002 to ease maintenance [analytics]
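A sketch of what stopping and later restoring the timers looks like; the unit glob is hypothetical, and disabling puppet first (so it does not re-create the timers mid-maintenance) is an assumed step:

    # Keep puppet from restoring the timers while they are stopped
    sudo puppet agent --disable "analytics maintenance"
    # List the active timers, then stop the relevant ones (glob is hypothetical)
    systemctl list-timers
    sudo systemctl stop 'refine_*.timer'
    # After maintenance, re-enable puppet; its next run restores the timers
    sudo puppet agent --enable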
09:41 <elukey> force re-creation of jupyterhub's default venv on stat1006 after reimage [analytics]
07:29 <klausman> Starting reimaging of stat1006 [analytics]
06:48 <elukey> on an-launcher1002: sudo -u hdfs kerberos-run-command hdfs hdfs dfs -rm -r -skipTrash /var/log/hadoop-yarn/apps/mirrys/logs/* [analytics]
06:45 <elukey> on an-launcher1002: sudo -u hdfs kerberos-run-command hdfs hdfs dfs -rm -r -skipTrash /var/log/hadoop-yarn/apps/analytics-privatedata/logs/* [analytics]
06:39 <elukey> manually ran "/usr/bin/find /srv/backup/hadoop/namenode -mtime +15 -delete" on an-master1002 to free some space in the backup partition [analytics]
2020-09-23
07:29 <elukey> re-enable timers on an-launcher1002 - maintenance postponed [analytics]
06:06 <elukey> stop timers on an-launcher1002 as prep step before maintenance [analytics]
2020-09-22
06:29 <elukey> re-run webrequest-load-text 21/09T21 - failed due to sporadic hive/kerberos issue (SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://an-coord1001.eqiad.wmnet:10000/default;principal=hive/an-coord1001.eqiad.wmnet@WIKIMEDIA: Peer indicated failure: Failure to initialize security context) [analytics]
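Re-running a failed hour like this is normally an Oozie coordinator-action rerun; a sketch, with the coordinator ID and action number as placeholders:

    # Re-run the failed hourly action (IDs are hypothetical; -oozie <url>
    # may be needed if OOZIE_URL is not set in the environment)
    sudo -u analytics kerberos-run-command analytics \
        oozie job -rerun 0000123-200901000000000-oozie-oozi-C -action 42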
2020-09-21
18:00 <elukey> execute sudo -u hdfs kerberos-run-command hdfs hdfs dfs -rm -r -skipTrash /var/log/hadoop-yarn/apps/mgerlach/logs/* to free ~30TB of space on HDFS (Replicated) [analytics]
17:44 <elukey> restart yarn resource managers on an-master100[1,2] to pick up settings for https://gerrit.wikimedia.org/r/c/operations/puppet/+/628887 [analytics]
16:59 <joal> Manually add a _SUCCESS file to the hourly partition of page_move events so that the wikidata-item_page_link job starts [analytics]
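Writing the marker is a one-liner; a sketch with a hypothetical partition path (the log does not give the exact one):

    # Create an empty _SUCCESS marker so downstream jobs treat the partition as complete
    sudo -u analytics kerberos-run-command analytics \
        hdfs dfs -touchz /wmf/data/event/mediawiki_page_move/year=2020/month=9/day=21/hour=16/_SUCCESS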
16:21 <joal> Kill-restart wikidata-item_page_link-weekly-coord so it does not wait on missing data [analytics]
15:45 <joal> Restart wikidata-json_entity-weekly coordinator after wrong kill in new hue UI [analytics]
15:42 <joal> manually killing wikidata-json_entity-weekly-wf-2020-08-31 - Raw data is missing from dumps folder (json dumps) [analytics]
2020-09-18
15:05 <elukey> systemctl reset-failed monitor_refine_eventlogging_legacy_failure_flags.service on an-launcher1002 to clear icinga alarms [analytics]
10:38 <elukey> force ./create_virtualenv.sh in /srv/jupyterhub/deploy to update jupyter's default venv [analytics]
2020-09-17
10:12 <klausman> started backup of stat1004's /srv to stat1008 [analytics]
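A host-to-host /srv copy like this is typically rsync over ssh; a sketch with a hypothetical destination path (and assuming root ssh between the hosts):

    # On stat1008: pull stat1004's /srv, preserving ownership and hardlinks
    sudo rsync -aH --numeric-ids stat1004.eqiad.wmnet:/srv/ /srv/stat1004-backup/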
2020-09-16
19:12 <joal> Manually kill webrequest-hour oozie job that started before the restart could happen (waiting for previous hour to be finished) [analytics]
19:00 <joal> Kill-restart data-quality-hourly bundle after deploy [analytics]
18:57 <joal> Kill-restart webrequest after deploy [analytics]
18:44 <joal> Kill-restart mediawiki-history-reduced job after deploy [analytics]
17:59 <joal> Deploy refinery onto HDFS [analytics]
17:46 <joal> Deploy refinery using scap [analytics]
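The two entries above are the standard two-step refinery deploy: scap pushes the repo to its targets, then a helper script syncs the tree to HDFS. A sketch, with the script name and flags assumed from the refinery repo rather than confirmed by the log:

    # On the deployment server: push refinery to its targets
    scap deploy "Analytics weekly train"
    # On a coordinator/launcher host: sync the deployed tree to HDFS
    sudo -u analytics-deploy kerberos-run-command analytics-deploy \
        /srv/deployment/analytics/refinery/bin/refinery-deploy-to-hdfs --verbose --no-dry-run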