How many disks per node?
It probably says something like 1+1 in there somewhere
Oct 15, 2023, 6:30:00 PM [WRN] overall HEALTH_WARN 1 MDSs report slow metadata IOs; mons a,b are low on available space; Reduced data availability: 3 pgs inactive; 14 mgr modules have recently crashed; OSD count 0 < osd_pool_default_size 3
Oct 15, 2023, 6:20:00 PM [WRN] [WRN] TOO_FEW_OSDS: OSD count 0 < osd_pool_default_size 3
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7cc878c45d-wwg4n at 2023-10-15T16:14:50.887304Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7cc878c45d-wwg4n at 2023-10-15T16:15:05.891890Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7cc878c45d-wwg4n at 2023-10-15T16:14:52.247634Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7cc878c45d-wwg4n at 2023-10-15T16:15:20.643643Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7cc878c45d-wwg4n at 2023-10-15T15:26:19.752201Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7fb868897c-b8r7j at 2023-10-14T22:08:57.221255Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-54449b485b-x4ktt at 2023-10-15T13:34:43.688582Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7fb868897c-b8r7j at 2023-10-14T23:25:19.031776Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7fb868897c-b8r7j at 2023-10-14T23:25:17.976378Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7fb868897c-b8r7j at 2023-10-14T23:25:09.924703Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7fb868897c-b8r7j at 2023-10-14T23:25:08.414788Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7fb868897c-b8r7j at 2023-10-14T22:10:07.274748Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7fb868897c-b8r7j at 2023-10-14T22:10:06.146819Z
Oct 15, 2023, 6:20:00 PM [WRN] mgr module nfs crashed in daemon mgr.a on host rook-ceph-mgr-a-7fb868897c-b8r7j at 2023-10-14T22:10:02.967366Z
Oct 15, 2023, 6:20:00 PM [WRN] [WRN] RECENT_MGR_MODULE_CRASH: 14 mgr modules have recently crashed
Oct 15, 2023, 6:20:00 PM [WRN] pg 3.0 is stuck inactive for 38m, current state unknown, last acting []
Oct 15, 2023, 6:20:00 PM [WRN] pg 2.0 is stuck inactive for 39m, current state unknown, last acting []
Oct 15, 2023, 6:20:00 PM [WRN] pg 1.0 is stuck inactive for 44m, current state unknown, last acting []
Oct 15, 2023, 6:20:00 PM [WRN] [WRN] PG_AVAILABILITY: Reduced data availability: 3 pgs inactive
Oct 15, 2023, 6:20:00 PM [WRN] mon.b has 16% avail
Oct 15, 2023, 6:20:00 PM [WRN] mon.a has 27% avail
Oct 15, 2023, 6:20:00 PM [WRN] [WRN] MON_DISK_LOW: mons a,b are low on available space
Oct 15, 2023, 6:20:00 PM [WRN] mds.myfs-a(mds.0): 31 slow metadata IOs are blocked > 30 secs, oldest blocked for 2327 secs
Oct 15, 2023, 6:20:00 PM [WRN] [WRN] MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs
Oct 15, 2023, 6:20:00 PM [WRN] Health detail: HEALTH_WARN 1 MDSs report slow metadata IOs; mons a,b are low on available space; Reduced data availability: 3 pgs inactive; 14 mgr modules have recently crashed; OSD count 0 < osd_pool_default_size 3
Oct 15, 2023, 6:18:30 PM [WRN] Health check update: 14 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)
Oct 15, 2023, 6:17:59 PM [WRN] Health check update: 13 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)
Oct 15, 2023, 6:17:28 PM [WRN] Health check update: 12 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)
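The key line in that dump is TOO_FEW_OSDS: OSD count 0 < osd_pool_default_size 3 — no OSDs ever came up, which also explains the inactive PGs and the slow MDS metadata IOs. A rough sketch of how one might narrow this down in a Rook-Ceph cluster (assuming the default rook-ceph namespace and that the toolbox pod is deployed; pod label names are the standard Rook ones, verify them against your install):

```shell
# Did Rook create any OSD pods at all? (label is the standard Rook one)
kubectl -n rook-ceph get pods -l app=rook-ceph-osd

# The prepare jobs log why a disk was skipped (not empty, too small, already has a filesystem, ...)
kubectl -n rook-ceph logs -l app=rook-ceph-osd-prepare --tail=100

# The operator log shows discovery/provisioning decisions
kubectl -n rook-ceph logs deploy/rook-ceph-operator --tail=100

# From inside the toolbox pod: cluster-wide view
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree
```

Commonly in home labs the prepare logs show the candidate disk was rejected because it is not raw/empty; Rook only consumes devices with no partitions or filesystem unless configured otherwise.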
I sincerely hope that, with questions like these, you're deploying Ceph for a study lab and not for production.
In OpenShift it's one
Exactly right, it's a home test lab
Thanks, that's a relief)
They're discussing it today