Re: [Ticket#2024122303029917] Server - Disk Failure - AX52 #2405763 (116.202.37.55, 2a01:4f8:231:2e04::/64)
by Support - Hetzner Online GmbH
Dear Mr. Freyer,
The NVMe has just been replaced and the server has been restarted.
Kind regards
Jörg Schreiner
Hetzner Online GmbH
08223 Falkenstein / Germany
Tel: +49 3745 744 47 100
Fax: +49 3745 744 47 1050
www.hetzner.com
Register Court: Registergericht Ansbach, HRB 6089
CEO: Martin Hetzner, Stephan Konvickova, Günther Müller
For the purposes of this communication, we may save some
of your personal data. For information on our data privacy
policy, please see: https://www.hetzner.com/de/privacy-policy-notice
On 23.12.2024 22:53, topf(a)zapf.in wrote:
>
>
> ########################################
>
> No preferred date!
> Defective drive(s): unknown
> Functional drive(s): S676NU0W575753
>
> ########################################
>
> Unfortunately the SSD is defective; please replace it.
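>
> Serials like the functional one above can be listed with nvme-cli or smartmontools; a minimal sketch (the device name is an example, not taken from this server):
>
> # list all NVMe controllers with model and serial number (nvme-cli)
> nvme list
> # or query a single device with smartmontools
> smartctl -i /dev/nvme1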
>
> SMART log:
> dmesg excerpt:
> [16768679.987787] nvme nvme1: I/O tag 825 (a339) opcode 0x0 (I/O Cmd) QID 15 timeout, aborting req_op:FLUSH(2) size:0
> [16768679.988377] nvme nvme1: Abort status: 0x0
> [16768680.627786] nvme nvme1: I/O tag 378 (617a) opcode 0x0 (I/O Cmd) QID 16 timeout, aborting req_op:FLUSH(2) size:0
> [16768680.628366] nvme nvme1: Abort status: 0x0
> [16768710.191954] nvme nvme1: I/O tag 825 (b339) opcode 0x0 (I/O Cmd) QID 15 timeout, aborting req_op:FLUSH(2) size:0
> [16768710.192539] nvme nvme1: Abort status: 0x0
> [16768710.639953] nvme nvme1: I/O tag 378 (717a) opcode 0x0 (I/O Cmd) QID 16 timeout, aborting req_op:FLUSH(2) size:0
> [16768710.640519] nvme nvme1: Abort status: 0x0
> [16768713.842980] nvme nvme1: I/O tag 580 (f244) opcode 0x2 (I/O Cmd) QID 2 timeout, aborting req_op:READ(0) size:12288
> [16768713.843384] nvme nvme1: I/O tag 494 (c1ee) opcode 0x2 (I/O Cmd) QID 5 timeout, aborting req_op:READ(0) size:73728
> [16768713.843489] nvme nvme1: Abort status: 0x0
> [16768713.843693] nvme nvme1: I/O tag 555 (f22b) opcode 0x2 (I/O Cmd) QID 7 timeout, aborting req_op:READ(0) size:8192
> [16768713.844007] nvme nvme1: Abort status: 0x0
> [16768713.844301] nvme nvme1: I/O tag 244 (30f4) opcode 0x2 (I/O Cmd) QID 9 timeout, aborting req_op:READ(0) size:8192
> [16768713.844590] nvme nvme1: Abort status: 0x0
> [16768713.844891] nvme nvme1: I/O tag 819 (c333) opcode 0x2 (I/O Cmd) QID 11 timeout, aborting req_op:READ(0) size:45056
> [16768713.845186] nvme nvme1: Abort status: 0x0
> [16768713.845483] nvme nvme1: I/O tag 494 (31ee) opcode 0x2 (I/O Cmd) QID 14 timeout, aborting req_op:READ(0) size:12288
> [16768713.845826] nvme nvme1: Abort status: 0x0
> [16768713.846350] nvme nvme1: I/O tag 495 (c1ef) opcode 0x2 (I/O Cmd) QID 14 timeout, aborting req_op:READ(0) size:12288
> [16768713.846535] nvme nvme1: Abort status: 0x0
> [16768713.846849] nvme nvme1: I/O tag 496 (21f0) opcode 0x2 (I/O Cmd) QID 14 timeout, aborting req_op:READ(0) size:12288
> [16768713.847551] nvme nvme1: I/O tag 497 (c1f1) opcode 0x2 (I/O Cmd) QID 14 timeout, aborting req_op:READ(0) size:12288
> [16768713.847553] nvme nvme1: Abort status: 0x0
> [16768713.848299] nvme nvme1: Abort status: 0x0
> [16768713.848612] nvme nvme1: Abort status: 0x0
> [16768740.399126] nvme nvme1: I/O tag 825 (c339) opcode 0x0 (I/O Cmd) QID 15 timeout, aborting req_op:FLUSH(2) size:0
> [16768740.399657] nvme nvme1: Abort status: 0x0
> [16768740.648127] nvme nvme1: I/O tag 378 (817a) opcode 0x0 (I/O Cmd) QID 16 timeout, aborting req_op:FLUSH(2) size:0
> [16768740.648603] nvme nvme1: Abort status: 0x0
> [16768743.860139] nvme nvme1: I/O tag 580 (0244) opcode 0x2 (I/O Cmd) QID 2 timeout, aborting req_op:READ(0) size:12288
> [16768743.860548] nvme nvme1: I/O tag 494 (c1ee) opcode 0x2 (I/O Cmd) QID 5 timeout, reset controller
> [16768743.860651] nvme nvme1: Abort status: 0x0
> [16768792.617438] INFO: task txg_sync:537 blocked for more than 122 seconds.
> [16768792.617763] Tainted: P O 6.8.4-2-pve #1
> [16768792.618026] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [16768792.618284] task:txg_sync state:D stack:0 pid:537 tgid:537 ppid:2 flags:0x00004000
> [16768792.618547] Call Trace:
> [16768792.618799] <TASK>
> [16768792.619037] __schedule+0x401/0x15e0
> [16768792.619272] ? ttwu_queue_wakelist+0x101/0x110
> [16768792.619506] ? srso_alias_return_thunk+0x5/0xfbef5
> [16768792.619736] ? try_to_wake_up+0x248/0x5f0
> [16768792.619960] schedule+0x33/0x110
> [16768792.620179] cv_wait_common+0x109/0x140 [spl]
> [16768792.620399] ? __pfx_autoremove_wake_function+0x10/0x10
> [16768792.620619] __cv_wait+0x15/0x30 [spl]
> [16768792.620831] zil_sync+0xdd/0x580 [zfs]
> [16768792.621111] ? spa_taskq_dispatch_ent+0x66/0xe0 [zfs]
> [16768792.621381] ? srso_alias_return_thunk+0x5/0xfbef5
> [16768792.621594] ? zio_issue_async+0x53/0xb0 [zfs]
> [16768792.621855] ? srso_alias_return_thunk+0x5/0xfbef5
> [16768792.622048] ? zio_nowait+0xd2/0x1c0 [zfs]
> [16768792.622303] dmu_objset_sync+0x441/0x600 [zfs]
> [16768792.622564] dsl_dataset_sync+0x61/0x200 [zfs]
> [16768792.622816] dsl_pool_sync+0xb2/0x4e0 [zfs]
> [16768792.623065] spa_sync+0x578/0x1030 [zfs]
> [16768792.623319] ? srso_alias_return_thunk+0x5/0xfbef5
> [16768792.623500] ? srso_alias_return_thunk+0x5/0xfbef5
> [16768792.623673] ? spa_txg_history_init_io+0x120/0x130 [zfs]
> [16768792.623917] txg_sync_thread+0x1fd/0x390 [zfs]
> [16768792.624150] ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
> [16768792.624374] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
> [16768792.624546] thread_generic_wrapper+0x5c/0x70 [spl]
> [16768792.624715] kthread+0xef/0x120
> [...
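The hung task above is ZFS's transaction group sync thread (txg_sync): once the controller reset left the FLUSH and READ commands unanswered, the pool could no longer commit its transaction group, so the kernel flagged the thread as blocked. After a swap like the one confirmed above, the pool typically still has to be resilvered onto the new drive by hand; a hedged sketch, assuming a ZFS mirror with placeholder pool and device names:

# check pool health; the replaced NVMe should show as FAULTED or UNAVAIL
zpool status -v
# find the stable by-id path of the new drive
ls -l /dev/disk/by-id/ | grep nvme
# resilver onto the new drive (pool and device names are placeholders)
zpool replace rpool nvme-OLD-DISK-ID nvme-NEW-DISK-ID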