VMs unstable after migration — Webhostingtalk.nl forum thread (page 1 of 3, results 1 to 15 of 39)

  1. #1 — tvdh (Thread Starter)

    Hello,

    We run a 4-node Proxmox cluster.
    Central storage is an OmniOS ZFS SAN filled with SSDs.
    It is mounted via NFS on the Proxmox servers, and the VMs run on it.
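    For context: an NFS storage like this is normally defined in /etc/pve/storage.cfg along these lines (a sketch only; the storage name, server address and export path are assumptions pieced together from paths that come up later in this thread):

    Code:
    nfs: SSDtwo
            server 192.168.30.10
            export /SSDpoolTwo
            path /mnt/pve/SSDtwo
            content images
            options vers=3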

    We have actually had the problem for a long time that when we migrate a VM, it becomes unstable and all VMs then get extremely slow.
    Often we then shut the VM down completely and redo the migration while it is off, which essentially just moves the config file to another node.
    Sometimes that goes fine, but sometimes everything becomes unstable anyway.

    We have this with our current cluster, but we also had this problem with our old Proxmox 3 cluster and a different OmniOS ZFS SAN.

    Last night I moved a VPS between 2 nodes.
    At first it seemed to go well, and it was also perfectly fast.

    This morning, however, the whole VPS crashed with a kernel hang and the like.
    A number of other VPSes were incredibly slow.
    When I then hard-reset the VPS I have to do it about 3 times, otherwise it stays stuck on a blinking cursor. Around the 3rd time it suddenly boots, although it then takes some 20 minutes.
    At that moment the load on the node, the SAN and the network is not noticeably high at all.
    The node also shows hardly any iowait.

    I have searched all over but simply cannot find a cause; I have also browsed through various Proxmox logs, but I see nothing strange.

    Has anyone ever had this problem too? Or can anyone help me further?

    Here is the log of the VM at the time of the problems this morning:

    Code:
    Dec  2 09:26:20 webserver14 kernel: [30600.686097] INFO: task jbd2/dm-0-8:718 blocked for more than 120 seconds.
    Dec  2 09:26:20 webserver14 kernel: [30600.687311]       Tainted: P           -- ------------    2.6.32-773.26.1.lve1.4.35.el6.x86_64 #1 
    Dec  2 09:26:20 webserver14 kernel: [30600.687499] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Dec  2 09:26:20 webserver14 kernel: [30600.687643] jbd2/dm-0-8   D ffff880836d7ef60     0   718      2    0 0x00000000
    Dec  2 09:26:20 webserver14 kernel: [30600.687647]  ffff880835e8fc00 0000000000000046 0000000000000000 0000000006282599
    Dec  2 09:26:20 webserver14 kernel: [30600.687650]  ffff88083662dda0 ffff88083c58e2f0 00001ba063ff9f34 ffffffffa5de13dc
    Dec  2 09:26:20 webserver14 kernel: [30600.687653]  000000000cc1cd8c 0000000000000000 0000000101caeb2b ffff880836d7f528
    Dec  2 09:26:20 webserver14 kernel: [30600.687656] Call Trace:
    Dec  2 09:26:20 webserver14 kernel: [30600.687662]  [<ffffffff811f4910>] ? sync_buffer+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.687666]  [<ffffffff81547de3>] io_schedule+0x73/0xc0
    Dec  2 09:26:20 webserver14 kernel: [30600.687668]  [<ffffffff811f4950>] sync_buffer+0x40/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.687671]  [<ffffffff81548f4f>] __wait_on_bit+0x5f/0x90
    Dec  2 09:26:20 webserver14 kernel: [30600.687673]  [<ffffffff811f4910>] ? sync_buffer+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.687675]  [<ffffffff81548ff8>] out_of_line_wait_on_bit+0x78/0x90
    Dec  2 09:26:20 webserver14 kernel: [30600.687679]  [<ffffffff810abd20>] ? wake_bit_function+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.687681]  [<ffffffff811f5c66>] __wait_on_buffer+0x26/0x30
    Dec  2 09:26:20 webserver14 kernel: [30600.687692]  [<ffffffffa007f116>] jbd2_journal_commit_transaction+0xaa6/0x14f0 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.687705]  [<ffffffff81092a3b>] ? try_to_del_timer_sync+0x7b/0xe0
    Dec  2 09:26:20 webserver14 kernel: [30600.687709]  [<ffffffffa0084bd0>] kjournald2+0xd0/0x230 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.687712]  [<ffffffff810abca0>] ? autoremove_wake_function+0x0/0x40
    Dec  2 09:26:20 webserver14 kernel: [30600.687716]  [<ffffffffa0084b00>] ? kjournald2+0x0/0x230 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.687718]  [<ffffffff810ab8ae>] kthread+0x9e/0xc0
    Dec  2 09:26:20 webserver14 kernel: [30600.687721]  [<ffffffff8100c3ca>] child_rip+0xa/0x20
    Dec  2 09:26:20 webserver14 kernel: [30600.687724]  [<ffffffff810ab810>] ? kthread+0x0/0xc0
    Dec  2 09:26:20 webserver14 kernel: [30600.687726]  [<ffffffff8100c3c0>] ? child_rip+0x0/0x20
    Dec  2 09:26:20 webserver14 kernel: [30600.687728] INFO: task flush-253:2:1248 blocked for more than 120 seconds.
    Dec  2 09:26:20 webserver14 kernel: [30600.687855]       Tainted: P           -- ------------    2.6.32-773.26.1.lve1.4.35.el6.x86_64 #1 
    Dec  2 09:26:20 webserver14 kernel: [30600.688010] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Dec  2 09:26:20 webserver14 kernel: [30600.688189] flush-253:2   D ffff880836560800     0  1248      2    0 0x00000000
    Dec  2 09:26:20 webserver14 kernel: [30600.688192]  ffff8808391ef530 0000000000000046 0000000000000000 0005120000000246
    Dec  2 09:26:20 webserver14 kernel: [30600.688195]  ffff88083c507ce0 ffff88083c507cd0 00001ba3552ba3d9 ffff880835e2e4c0
    Dec  2 09:26:20 webserver14 kernel: [30600.688198]  0000000000011200 0000000000000000 0000000101cb1c83 ffff880836560dc8
    Dec  2 09:26:20 webserver14 kernel: [30600.688200] Call Trace:
    Dec  2 09:26:20 webserver14 kernel: [30600.688205]  [<ffffffffa007e0ad>] do_get_write_access+0x29d/0x510 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.688207]  [<ffffffff810abd20>] ? wake_bit_function+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.688211]  [<ffffffffa007e471>] jbd2_journal_get_write_access+0x31/0x50 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.688223]  [<ffffffffa00d2058>] __ext4_journal_get_write_access+0x38/0x80 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.688231]  [<ffffffffa00d3f22>] ext4_mb_mark_diskspace_used+0xf2/0x300 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.688239]  [<ffffffffa00db58b>] ext4_mb_new_blocks+0x3eb/0x710 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.688247]  [<ffffffffa00cc8aa>] ? ext4_ext_find_extent+0x2ba/0x340 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.688255]  [<ffffffffa00d0e17>] ext4_ext_get_blocks+0x547/0x14d0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.688261]  [<ffffffff81284593>] ? submit_bio+0x83/0x1c0
    Dec  2 09:26:20 webserver14 kernel: [30600.688265]  [<ffffffff8115b8a4>] ? release_pages+0x234/0x2a0
    Dec  2 09:26:20 webserver14 kernel: [30600.688272]  [<ffffffffa00a8015>] ext4_get_blocks+0xf5/0x2b0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.688275]  [<ffffffff8115b135>] ? pagevec_lookup_tag+0x25/0x40
    Dec  2 09:26:20 webserver14 kernel: [30600.688282]  [<ffffffffa00aa91c>] mpage_da_map_and_submit+0xac/0x3b0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.688289]  [<ffffffffa00ab494>] ext4_da_writepages+0x314/0x660 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.688307]  [<ffffffff81159601>] do_writepages+0x21/0x40
    Dec  2 09:26:20 webserver14 kernel: [30600.688312]  [<ffffffff811eb109>] __writeback_single_inode+0xa9/0x3d0
    Dec  2 09:26:20 webserver14 kernel: [30600.688316]  [<ffffffff811eb4c8>] writeback_single_inode_ub+0x98/0xd0
    Dec  2 09:26:20 webserver14 kernel: [30600.688319]  [<ffffffff811eb7aa>] writeback_sb_inodes+0xda/0x1b0
    Dec  2 09:26:20 webserver14 kernel: [30600.688323]  [<ffffffff811eb998>] writeback_inodes_wb+0x118/0x170
    Dec  2 09:26:20 webserver14 kernel: [30600.688326]  [<ffffffff811ebd2b>] wb_writeback+0x33b/0x460
    Dec  2 09:26:20 webserver14 kernel: [30600.688330]  [<ffffffff811ec005>] wb_do_writeback+0x1b5/0x260
    Dec  2 09:26:20 webserver14 kernel: [30600.688333]  [<ffffffff811ec14c>] bdi_writeback_task+0x9c/0x1f0
    Dec  2 09:26:20 webserver14 kernel: [30600.688337]  [<ffffffff8116ee50>] ? bdi_start_fn+0x0/0x110
    Dec  2 09:26:20 webserver14 kernel: [30600.688340]  [<ffffffff8116ee50>] ? bdi_start_fn+0x0/0x110
    Dec  2 09:26:20 webserver14 kernel: [30600.688344]  [<ffffffff8116eee5>] bdi_start_fn+0x95/0x110
    Dec  2 09:26:20 webserver14 kernel: [30600.688348]  [<ffffffff8116ee50>] ? bdi_start_fn+0x0/0x110
    Dec  2 09:26:20 webserver14 kernel: [30600.688351]  [<ffffffff810ab8ae>] kthread+0x9e/0xc0
    Dec  2 09:26:20 webserver14 kernel: [30600.688354]  [<ffffffff8100c3ca>] child_rip+0xa/0x20
    Dec  2 09:26:20 webserver14 kernel: [30600.688357]  [<ffffffff810ab810>] ? kthread+0x0/0xc0
    Dec  2 09:26:20 webserver14 kernel: [30600.688360]  [<ffffffff8100c3c0>] ? child_rip+0x0/0x20
    Dec  2 09:26:20 webserver14 kernel: [30600.688363] INFO: task jbd2/dm-2-8:1277 blocked for more than 120 seconds.
    Dec  2 09:26:20 webserver14 kernel: [30600.688482]       Tainted: P           -- ------------    2.6.32-773.26.1.lve1.4.35.el6.x86_64 #1 
    Dec  2 09:26:20 webserver14 kernel: [30600.688651] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Dec  2 09:26:20 webserver14 kernel: [30600.688795] jbd2/dm-2-8   D ffff880835d00f20     0  1277      2    0 0x00000000
    Dec  2 09:26:20 webserver14 kernel: [30600.688798]  ffff88083705bc00 0000000000000046 0000000000000000 0000000006282599
    Dec  2 09:26:20 webserver14 kernel: [30600.688801]  ffff88083af75520 ffff8808370bb1a0 00001b9fed50ab5a ffffffffa5de13dc
    Dec  2 09:26:20 webserver14 kernel: [30600.688804]  000000000cc1cd8c 0000000000000000 0000000101cae35f ffff880835d014e8
    Dec  2 09:26:20 webserver14 kernel: [30600.688807] Call Trace:
    Dec  2 09:26:20 webserver14 kernel: [30600.688809]  [<ffffffff811f4910>] ? sync_buffer+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.688812]  [<ffffffff81547de3>] io_schedule+0x73/0xc0
    Dec  2 09:26:20 webserver14 kernel: [30600.688814]  [<ffffffff811f4950>] sync_buffer+0x40/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.688816]  [<ffffffff81548f4f>] __wait_on_bit+0x5f/0x90
    Dec  2 09:26:20 webserver14 kernel: [30600.688818]  [<ffffffff811f4910>] ? sync_buffer+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.688820]  [<ffffffff81548ff8>] out_of_line_wait_on_bit+0x78/0x90
    Dec  2 09:26:20 webserver14 kernel: [30600.688822]  [<ffffffff810abd20>] ? wake_bit_function+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.688825]  [<ffffffff811f5c66>] __wait_on_buffer+0x26/0x30
    Dec  2 09:26:20 webserver14 kernel: [30600.688829]  [<ffffffffa007f116>] jbd2_journal_commit_transaction+0xaa6/0x14f0 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.688832]  [<ffffffff81092a3b>] ? try_to_del_timer_sync+0x7b/0xe0
    Dec  2 09:26:20 webserver14 kernel: [30600.688836]  [<ffffffffa0084bd0>] kjournald2+0xd0/0x230 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.688838]  [<ffffffff810abca0>] ? autoremove_wake_function+0x0/0x40
    Dec  2 09:26:20 webserver14 kernel: [30600.688842]  [<ffffffffa0084b00>] ? kjournald2+0x0/0x230 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.688844]  [<ffffffff810ab8ae>] kthread+0x9e/0xc0
    Dec  2 09:26:20 webserver14 kernel: [30600.688847]  [<ffffffff8100c3ca>] child_rip+0xa/0x20
    Dec  2 09:26:20 webserver14 kernel: [30600.688849]  [<ffffffff810ab810>] ? kthread+0x0/0xc0
    Dec  2 09:26:20 webserver14 kernel: [30600.688851]  [<ffffffff8100c3c0>] ? child_rip+0x0/0x20
    Dec  2 09:26:20 webserver14 kernel: [30600.688853] INFO: task flush-253:0:1280 blocked for more than 120 seconds.
    Dec  2 09:26:20 webserver14 kernel: [30600.688979]       Tainted: P           -- ------------    2.6.32-773.26.1.lve1.4.35.el6.x86_64 #1 
    Dec  2 09:26:20 webserver14 kernel: [30600.689155] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Dec  2 09:26:20 webserver14 kernel: [30600.689762] flush-253:0   D ffff880836f2c780     0  1280      2    0 0x00000000
    Dec  2 09:26:20 webserver14 kernel: [30600.689765]  ffff88083aff34c0 0000000000000046 0000000000000000 0000000000000000
    Dec  2 09:26:20 webserver14 kernel: [30600.689768]  ffff88083aff3470 ffffffff00000002 00001b9f8d4e340b ffffffff8114262e
    Dec  2 09:26:20 webserver14 kernel: [30600.689770]  000000000001ad80 0000000000000000 0000000101cadd14 ffff880836f2cd48
    Dec  2 09:26:20 webserver14 kernel: [30600.689773] Call Trace:
    Dec  2 09:26:20 webserver14 kernel: [30600.689776]  [<ffffffff8114262e>] ? find_get_page+0x1e/0xa0
    Dec  2 09:26:20 webserver14 kernel: [30600.689781]  [<ffffffffa007e0ad>] do_get_write_access+0x29d/0x510 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.689784]  [<ffffffff810abd20>] ? wake_bit_function+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.689787]  [<ffffffffa007e471>] jbd2_journal_get_write_access+0x31/0x50 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.689795]  [<ffffffffa00d2058>] __ext4_journal_get_write_access+0x38/0x80 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.689801]  [<ffffffffa00a6f03>] ext4_reserve_inode_write+0x73/0xa0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.689805]  [<ffffffffa007d92f>] ? jbd2_journal_dirty_metadata+0xff/0x150 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.689811]  [<ffffffffa00a6f7c>] ext4_mark_inode_dirty+0x4c/0x1d0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.689817]  [<ffffffffa00a7278>] ext4_dirty_inode+0x48/0x70 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.689819]  [<ffffffff811eae1a>] __mark_inode_dirty+0x5a/0x2a0
    Dec  2 09:26:20 webserver14 kernel: [30600.689825]  [<ffffffffa00a6581>] ext4_da_update_reserve_space+0x111/0x2a0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.689831]  [<ffffffffa00d0fe9>] ext4_ext_get_blocks+0x719/0x14d0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.689835]  [<ffffffff8115b8a4>] ? release_pages+0x234/0x2a0
    Dec  2 09:26:20 webserver14 kernel: [30600.689840]  [<ffffffffa00a8015>] ext4_get_blocks+0xf5/0x2b0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.689843]  [<ffffffff8115b135>] ? pagevec_lookup_tag+0x25/0x40
    Dec  2 09:26:20 webserver14 kernel: [30600.689849]  [<ffffffffa00aa91c>] mpage_da_map_and_submit+0xac/0x3b0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.689853]  [<ffffffffa007d3e5>] ? jbd2_journal_start+0xb5/0x100 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.689858]  [<ffffffffa00ab494>] ext4_da_writepages+0x314/0x660 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.689862]  [<ffffffff81159601>] do_writepages+0x21/0x40
    Dec  2 09:26:20 webserver14 kernel: [30600.689864]  [<ffffffff811eb109>] __writeback_single_inode+0xa9/0x3d0
    Dec  2 09:26:20 webserver14 kernel: [30600.689867]  [<ffffffff811eb47b>] writeback_single_inode_ub+0x4b/0xd0
    Dec  2 09:26:20 webserver14 kernel: [30600.689869]  [<ffffffff811eb7aa>] writeback_sb_inodes+0xda/0x1b0
    Dec  2 09:26:20 webserver14 kernel: [30600.689871]  [<ffffffff811eb998>] writeback_inodes_wb+0x118/0x170
    Dec  2 09:26:20 webserver14 kernel: [30600.689874]  [<ffffffff811ebd2b>] wb_writeback+0x33b/0x460
    Dec  2 09:26:20 webserver14 kernel: [30600.689876]  [<ffffffff811ec005>] wb_do_writeback+0x1b5/0x260
    Dec  2 09:26:20 webserver14 kernel: [30600.689879]  [<ffffffff811ec14c>] bdi_writeback_task+0x9c/0x1f0
    Dec  2 09:26:20 webserver14 kernel: [30600.689882]  [<ffffffff8116ee50>] ? bdi_start_fn+0x0/0x110
    Dec  2 09:26:20 webserver14 kernel: [30600.689884]  [<ffffffff8116ee50>] ? bdi_start_fn+0x0/0x110
    Dec  2 09:26:20 webserver14 kernel: [30600.689887]  [<ffffffff8116eee5>] bdi_start_fn+0x95/0x110
    Dec  2 09:26:20 webserver14 kernel: [30600.689889]  [<ffffffff8116ee50>] ? bdi_start_fn+0x0/0x110
    Dec  2 09:26:20 webserver14 kernel: [30600.689891]  [<ffffffff810ab8ae>] kthread+0x9e/0xc0
    Dec  2 09:26:20 webserver14 kernel: [30600.689894]  [<ffffffff8100c3ca>] child_rip+0xa/0x20
    Dec  2 09:26:20 webserver14 kernel: [30600.689896]  [<ffffffff810ab810>] ? kthread+0x0/0xc0
    Dec  2 09:26:20 webserver14 kernel: [30600.689898]  [<ffffffff8100c3c0>] ? child_rip+0x0/0x20
    Dec  2 09:26:20 webserver14 kernel: [30600.689900] INFO: task lfd:1806 blocked for more than 120 seconds.
    Dec  2 09:26:20 webserver14 kernel: [30600.690461]       Tainted: P           -- ------------    2.6.32-773.26.1.lve1.4.35.el6.x86_64 #1 
    Dec  2 09:26:20 webserver14 kernel: [30600.691730] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Dec  2 09:26:20 webserver14 kernel: [30600.692773] lfd           D ffff880836cf2fe0     0  1806      1    0 0x00000080
    Dec  2 09:26:20 webserver14 kernel: [30600.692778]  ffff88083a6db9e8 0000000000000086 0000000000000000 0000000000000282
    Dec  2 09:26:20 webserver14 kernel: [30600.692783]  ffff880836cf2fe0 ffff88083a6db968 00001b9f3b82b42a ffffffff8114262e
    Dec  2 09:26:20 webserver14 kernel: [30600.692787]  000000000001ad80 0000000000000000 0000000101cad783 ffff880836cf35a8
    Dec  2 09:26:20 webserver14 kernel: [30600.692790] Call Trace:
    Dec  2 09:26:20 webserver14 kernel: [30600.692794]  [<ffffffff8114262e>] ? find_get_page+0x1e/0xa0
    Dec  2 09:26:20 webserver14 kernel: [30600.692801]  [<ffffffffa007e0ad>] do_get_write_access+0x29d/0x510 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.692805]  [<ffffffff810abd20>] ? wake_bit_function+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.692809]  [<ffffffffa007e471>] jbd2_journal_get_write_access+0x31/0x50 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.692819]  [<ffffffffa00d2058>] __ext4_journal_get_write_access+0x38/0x80 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.692826]  [<ffffffffa00a6f03>] ext4_reserve_inode_write+0x73/0xa0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.692833]  [<ffffffffa00a6f7c>] ext4_mark_inode_dirty+0x4c/0x1d0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.692838]  [<ffffffffa007d3e5>] ? jbd2_journal_start+0xb5/0x100 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.692847]  [<ffffffffa00a7278>] ext4_dirty_inode+0x48/0x70 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.692851]  [<ffffffff811eae1a>] __mark_inode_dirty+0x5a/0x2a0
    Dec  2 09:26:20 webserver14 kernel: [30600.692855]  [<ffffffff811d8395>] touch_atime+0x195/0x1a0
    Dec  2 09:26:20 webserver14 kernel: [30600.692858]  [<ffffffff81144757>] generic_file_read_iter+0x337/0x640
    Dec  2 09:26:20 webserver14 kernel: [30600.692862]  [<ffffffff81144aeb>] generic_file_aio_read+0x8b/0xa0
    Dec  2 09:26:20 webserver14 kernel: [30600.692867]  [<ffffffff811b923a>] do_sync_read+0xfa/0x140
    Dec  2 09:26:20 webserver14 kernel: [30600.692870]  [<ffffffff810abca0>] ? autoremove_wake_function+0x0/0x40
    Dec  2 09:26:20 webserver14 kernel: [30600.692874]  [<ffffffff811bf784>] ? cp_new_stat+0xe4/0x100
    Dec  2 09:26:20 webserver14 kernel: [30600.692877]  [<ffffffff811b9b15>] vfs_read+0xb5/0x1a0
    Dec  2 09:26:20 webserver14 kernel: [30600.692880]  [<ffffffff811bae36>] ? fget_light_pos+0x16/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.692883]  [<ffffffff811b9f11>] sys_read+0x51/0xb0
    Dec  2 09:26:20 webserver14 kernel: [30600.692887]  [<ffffffff8110166e>] ? __audit_syscall_exit+0x25e/0x290
    Dec  2 09:26:20 webserver14 kernel: [30600.692890]  [<ffffffff8100b1a2>] system_call_fastpath+0x16/0x1b
    Dec  2 09:26:20 webserver14 kernel: [30600.692894] INFO: task communicator:172331 blocked for more than 120 seconds.
    Dec  2 09:26:20 webserver14 kernel: [30600.693466]       Tainted: P           -- ------------    2.6.32-773.26.1.lve1.4.35.el6.x86_64 #1 
    Dec  2 09:26:20 webserver14 kernel: [30600.694506] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Dec  2 09:26:20 webserver14 kernel: [30600.695528] communicator  D ffff8808378643c0     0 172331      1    0 0x00000080
    Dec  2 09:26:20 webserver14 kernel: [30600.695532]  ffff8806bcb57ab8 0000000000000086 0000000000000000 ffffffff814d9541
    Dec  2 09:26:20 webserver14 kernel: [30600.695536]  ffff8806bcb57a38 ffffffff814cfd20 00001ba2c0f42f65 ffffffff814d1883
    Dec  2 09:26:20 webserver14 kernel: [30600.695540]  ffff8806cc228800 0000000000000000 0000000101cb12cd ffff880837864988
    Dec  2 09:26:20 webserver14 kernel: [30600.695544] Call Trace:
    Dec  2 09:26:20 webserver14 kernel: [30600.695548]  [<ffffffff814d9541>] ? tcp_send_delayed_ack+0xf1/0x100
    Dec  2 09:26:20 webserver14 kernel: [30600.695553]  [<ffffffff814cfd20>] ? __tcp_ack_snd_check+0x70/0xa0
    Dec  2 09:26:20 webserver14 kernel: [30600.695557]  [<ffffffff814d1883>] ? tcp_data_snd_check+0x33/0x130
    Dec  2 09:26:20 webserver14 kernel: [30600.695564]  [<ffffffffa007e0ad>] do_get_write_access+0x29d/0x510 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.695568]  [<ffffffff810abd20>] ? wake_bit_function+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.695572]  [<ffffffffa007e471>] jbd2_journal_get_write_access+0x31/0x50 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.695581]  [<ffffffffa00d2058>] __ext4_journal_get_write_access+0x38/0x80 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.695588]  [<ffffffffa00a2134>] ext4_new_inode+0x3c4/0x12b0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.695594]  [<ffffffffa007d3e5>] ? jbd2_journal_start+0xb5/0x100 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.695601]  [<ffffffffa00b26e8>] ext4_create+0x148/0x230 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.695605]  [<ffffffff811c97d0>] vfs_create+0xd0/0xf0
    Dec  2 09:26:20 webserver14 kernel: [30600.695608]  [<ffffffff811cde9a>] do_filp_open+0xb0a/0xd10
    Dec  2 09:26:20 webserver14 kernel: [30600.695611]  [<ffffffff811ccfd4>] ? user_path_at+0x64/0xa0
    Dec  2 09:26:20 webserver14 kernel: [30600.695615]  [<ffffffff811bf784>] ? cp_new_stat+0xe4/0x100
    Dec  2 09:26:20 webserver14 kernel: [30600.695618]  [<ffffffff811c81e5>] ? getname_flags+0x175/0x260
    Dec  2 09:26:20 webserver14 kernel: [30600.695622]  [<ffffffff811db7e2>] ? alloc_fd+0x92/0x160
    Dec  2 09:26:20 webserver14 kernel: [30600.695624]  [<ffffffff811b5db7>] do_sys_open+0x67/0x130
    Dec  2 09:26:20 webserver14 kernel: [30600.695627]  [<ffffffff811b5ec0>] sys_open+0x20/0x30
    Dec  2 09:26:20 webserver14 kernel: [30600.695630]  [<ffffffff8100b1a2>] system_call_fastpath+0x16/0x1b
    Dec  2 09:26:20 webserver14 kernel: [30600.695636] INFO: task da-popb4smtp:2516 blocked for more than 120 seconds.
    Dec  2 09:26:20 webserver14 kernel: [30600.696223]       Tainted: P           -- ------------    2.6.32-773.26.1.lve1.4.35.el6.x86_64 #1 
    Dec  2 09:26:20 webserver14 kernel: [30600.697281] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Dec  2 09:26:20 webserver14 kernel: [30600.698308] da-popb4smtp  D ffff8807cfaf8340     0  2516      1    0 0x00000080
    Dec  2 09:26:20 webserver14 kernel: [30600.698312]  ffff8807cfb1f9f8 0000000000000086 0000000000000000 0000000006282599
    Dec  2 09:26:20 webserver14 kernel: [30600.698317]  ffff8807cfb1f9b8 ffff880836cacad0 00001b9ecdb430cd ffffffffa5de13dc
    Dec  2 09:26:20 webserver14 kernel: [30600.698320]  000000000cc1cd8c 0000000000000000 0000000101cad088 ffff8807cfaf8908
    Dec  2 09:26:20 webserver14 kernel: [30600.698324] Call Trace:
    Dec  2 09:26:20 webserver14 kernel: [30600.698329]  [<ffffffff81142820>] ? sync_page+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.698333]  [<ffffffff81547de3>] io_schedule+0x73/0xc0
    Dec  2 09:26:20 webserver14 kernel: [30600.698336]  [<ffffffff8114285d>] sync_page+0x3d/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.698339]  [<ffffffff81548f4f>] __wait_on_bit+0x5f/0x90
    Dec  2 09:26:20 webserver14 kernel: [30600.698343]  [<ffffffff81142a93>] wait_on_page_bit+0x73/0x80
    Dec  2 09:26:20 webserver14 kernel: [30600.698346]  [<ffffffff810abd20>] ? wake_bit_function+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.698351]  [<ffffffff8115b172>] ? pagevec_lookup+0x22/0x30
    Dec  2 09:26:20 webserver14 kernel: [30600.698354]  [<ffffffff8115d170>] truncate_inode_pages_range+0x300/0x4f0
    Dec  2 09:26:20 webserver14 kernel: [30600.698360]  [<ffffffff8115d3f5>] truncate_inode_pages+0x15/0x20
    Dec  2 09:26:20 webserver14 kernel: [30600.698363]  [<ffffffff8115d44f>] truncate_pagecache+0x4f/0x70
    Dec  2 09:26:20 webserver14 kernel: [30600.698365]  [<ffffffff8115d489>] truncate_setsize+0x19/0x20
    Dec  2 09:26:20 webserver14 kernel: [30600.698368]  [<ffffffff8115d4ce>] vmtruncate+0x3e/0x70
    Dec  2 09:26:20 webserver14 kernel: [30600.698371]  [<ffffffff811da4e0>] inode_setattr+0x30/0x60
    Dec  2 09:26:20 webserver14 kernel: [30600.698378]  [<ffffffffa00a9ff1>] ext4_setattr+0x101/0x3a0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.698381]  [<ffffffff811da871>] notify_change+0x111/0x340
    Dec  2 09:26:20 webserver14 kernel: [30600.698388]  [<ffffffff811bbe85>] ? __sb_start_write+0xd5/0x1b0
    Dec  2 09:26:20 webserver14 kernel: [30600.698392]  [<ffffffff811b7504>] do_truncate+0x64/0xa0
    Dec  2 09:26:20 webserver14 kernel: [30600.698395]  [<ffffffff811cdc00>] do_filp_open+0x870/0xd10
    Dec  2 09:26:20 webserver14 kernel: [30600.698400]  [<ffffffff812a2201>] ? cpumask_any_but+0x31/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.698403]  [<ffffffff811c81e5>] ? getname_flags+0x175/0x260
    Dec  2 09:26:20 webserver14 kernel: [30600.698406]  [<ffffffff811db7e2>] ? alloc_fd+0x92/0x160
    Dec  2 09:26:20 webserver14 kernel: [30600.698409]  [<ffffffff811b5db7>] do_sys_open+0x67/0x130
    Dec  2 09:26:20 webserver14 kernel: [30600.698413]  [<ffffffff811b5ec0>] sys_open+0x20/0x30
    Dec  2 09:26:20 webserver14 kernel: [30600.698416]  [<ffffffff8100b1a2>] system_call_fastpath+0x16/0x1b
    Dec  2 09:26:20 webserver14 kernel: [30600.698420] INFO: task collectl:3237 blocked for more than 120 seconds.
    Dec  2 09:26:20 webserver14 kernel: [30600.698965]       Tainted: P           -- ------------    2.6.32-773.26.1.lve1.4.35.el6.x86_64 #1 
    Dec  2 09:26:20 webserver14 kernel: [30600.699982] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Dec  2 09:26:20 webserver14 kernel: [30600.700995] collectl      D ffff88078de75120     0  3237      1    0 0x00000084
    Dec  2 09:26:20 webserver14 kernel: [30600.700998]  ffff88078df6ba58 0000000000000086 0000000000000000 ffff88078df6bd38
    Dec  2 09:26:20 webserver14 kernel: [30600.701002]  000000000000000f ffff880836c67180 00001ba61cbc934e ffff88078df6bd78
    Dec  2 09:26:20 webserver14 kernel: [30600.701005]  0000000000400000 0000000000000000 0000000101cb4af6 ffff88078de756e8
    Dec  2 09:26:20 webserver14 kernel: [30600.701007] Call Trace:
    Dec  2 09:26:20 webserver14 kernel: [30600.701012]  [<ffffffffa007e0ad>] do_get_write_access+0x29d/0x510 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.701016]  [<ffffffff810abd20>] ? wake_bit_function+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.701020]  [<ffffffffa007e471>] jbd2_journal_get_write_access+0x31/0x50 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.701028]  [<ffffffffa00d2058>] __ext4_journal_get_write_access+0x38/0x80 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.701035]  [<ffffffffa00a6f03>] ext4_reserve_inode_write+0x73/0xa0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.701041]  [<ffffffffa00a6f7c>] ext4_mark_inode_dirty+0x4c/0x1d0 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.701046]  [<ffffffffa007d3e5>] ? jbd2_journal_start+0xb5/0x100 [jbd2]
    Dec  2 09:26:20 webserver14 kernel: [30600.701052]  [<ffffffffa00a7278>] ext4_dirty_inode+0x48/0x70 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.701056]  [<ffffffff811eae1a>] __mark_inode_dirty+0x5a/0x2a0
    Dec  2 09:26:20 webserver14 kernel: [30600.701060]  [<ffffffff811d8161>] file_update_time+0x121/0x1c0
    Dec  2 09:26:20 webserver14 kernel: [30600.701066]  [<ffffffff81144fe4>] __generic_file_write_iter+0x1f4/0x420
    Dec  2 09:26:20 webserver14 kernel: [30600.701074]  [<ffffffff81145295>] __generic_file_aio_write+0x85/0xa0
    Dec  2 09:26:20 webserver14 kernel: [30600.701078]  [<ffffffff81145349>] generic_file_aio_write+0x99/0x110
    Dec  2 09:26:20 webserver14 kernel: [30600.701084]  [<ffffffffa00a00a8>] ext4_file_write+0x58/0x190 [ext4]
    Dec  2 09:26:20 webserver14 kernel: [30600.701088]  [<ffffffff811b90f2>] do_sync_write+0xf2/0x140
    Dec  2 09:26:20 webserver14 kernel: [30600.701091]  [<ffffffff811b93d8>] vfs_write+0xb8/0x1a0
    Dec  2 09:26:20 webserver14 kernel: [30600.701094]  [<ffffffff811bae36>] ? fget_light_pos+0x16/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.701097]  [<ffffffff811b9e61>] sys_write+0x51/0xb0
    Dec  2 09:26:20 webserver14 kernel: [30600.701100]  [<ffffffff8110166e>] ? __audit_syscall_exit+0x25e/0x290
    Dec  2 09:26:20 webserver14 kernel: [30600.701104]  [<ffffffff8100b1a2>] system_call_fastpath+0x16/0x1b
    Dec  2 09:26:20 webserver14 kernel: [30600.701109] INFO: task python2.7:6689 blocked for more than 120 seconds.
    Dec  2 09:26:20 webserver14 kernel: [30600.701663]       Tainted: P           -- ------------    2.6.32-773.26.1.lve1.4.35.el6.x86_64 #1 
    Dec  2 09:26:20 webserver14 kernel: [30600.702712] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Dec  2 09:26:20 webserver14 kernel: [30600.703732] python2.7     D ffff880838666ba0     0  6689   6665    0 0x00000080
    Dec  2 09:26:20 webserver14 kernel: [30600.703737]  ffff880704603cb8 0000000000000086 0000000000000000 0000000006282599
    Dec  2 09:26:20 webserver14 kernel: [30600.703741]  ffff880704603c78 ffff88078e055280 00001b9ee7f3c232 ffffffffa5de13dc
    Dec  2 09:26:20 webserver14 kernel: [30600.703745]  000000000cc1cd8c 0000000000000000 0000000101cad23c ffff880838667168
    Dec  2 09:26:20 webserver14 kernel: [30600.703751] Call Trace:
    Dec  2 09:26:20 webserver14 kernel: [30600.703754]  [<ffffffff81142820>] ? sync_page+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.703758]  [<ffffffff81547de3>] io_schedule+0x73/0xc0
    Dec  2 09:26:20 webserver14 kernel: [30600.703760]  [<ffffffff8114285d>] sync_page+0x3d/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.703762]  [<ffffffff81548f4f>] __wait_on_bit+0x5f/0x90
    Dec  2 09:26:20 webserver14 kernel: [30600.703766]  [<ffffffff81142a93>] wait_on_page_bit+0x73/0x80
    Dec  2 09:26:20 webserver14 kernel: [30600.703770]  [<ffffffff810abd20>] ? wake_bit_function+0x0/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.703774]  [<ffffffff8115b135>] ? pagevec_lookup_tag+0x25/0x40
    Dec  2 09:26:20 webserver14 kernel: [30600.703777]  [<ffffffff8114300b>] wait_on_page_writeback_range+0xfb/0x190
    Dec  2 09:26:20 webserver14 kernel: [30600.703783]  [<ffffffff811431d8>] filemap_write_and_wait_range+0x78/0x90
    Dec  2 09:26:20 webserver14 kernel: [30600.703787]  [<ffffffff811f0be6>] vfs_fsync_range+0x1c6/0x260
    Dec  2 09:26:20 webserver14 kernel: [30600.703790]  [<ffffffff811f0ced>] vfs_fsync+0x1d/0x20
    Dec  2 09:26:20 webserver14 kernel: [30600.703794]  [<ffffffff811f0d73>] do_fsync+0x83/0xe0
    Dec  2 09:26:20 webserver14 kernel: [30600.703798]  [<ffffffff811f0e00>] sys_fsync+0x10/0x20
    Dec  2 09:26:20 webserver14 kernel: [30600.703800]  [<ffffffff8100b1a2>] system_call_fastpath+0x16/0x1b
    Dec  2 09:26:20 webserver14 kernel: [30600.703804] INFO: task nginx:214961 blocked for more than 120 seconds.
    Dec  2 09:26:20 webserver14 kernel: [30600.704385]       Tainted: P           -- ------------    2.6.32-773.26.1.lve1.4.35.el6.x86_64 #1 
    Dec  2 09:26:20 webserver14 kernel: [30600.705474] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Dec  2 09:26:20 webserver14 kernel: [30600.706478] nginx         D ffff880838b383c0     0 214961 214958    0 0x00000080
    Dec  2 09:26:20 webserver14 kernel: [30600.706482]  ffff8806bcb5fe68 0000000000000086 0000000000000000 ffff88083ccea900
    Dec  2 09:26:20 webserver14 kernel: [30600.706486]  ffff8806bcb5fdc8 0000000000000282 00001b9ffa1b1077 ffffffff811d8aef
    Dec  2 09:26:20 webserver14 kernel: [30600.706490]  ffff88080fa0c6c8 0000000000000000 0000000101cae437 ffff880838b38988
    Dec  2 09:26:20 webserver14 kernel: [30600.706494] Call Trace:
    Dec  2 09:26:20 webserver14 kernel: [30600.706497]  [<ffffffff811d8aef>] ? destroy_inode+0x2f/0x60
    Dec  2 09:26:20 webserver14 kernel: [30600.706502]  [<ffffffff81549616>] __mutex_lock_slowpath+0x96/0x210
    Dec  2 09:26:20 webserver14 kernel: [30600.706505]  [<ffffffff8154913b>] mutex_lock+0x2b/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.706509]  [<ffffffff811bae5f>] fget_light_pos+0x3f/0x50
    Dec  2 09:26:20 webserver14 kernel: [30600.706512]  [<ffffffff811b9e38>] sys_write+0x28/0xb0
    Dec  2 09:26:20 webserver14 kernel: [30600.706515]  [<ffffffff8110166e>] ? __audit_syscall_exit+0x25e/0x290
    Dec  2 09:26:20 webserver14 kernel: [30600.706518]  [<ffffffff8100b1a2>] system_call_fastpath+0x16/0x1b
    Dec  2 09:26:22 webserver14 pure-ftpd: (?@37.247.42.6) [INFO] New connection from 37.247.42.6
    Dec  2 09:26:27 webserver14 kernel: [30608.031515] Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=2a:50:e6:37:f5:ad:00:08:a4:56:50:00:08:00 SRC=169.54.233.116 DST=37.247.42.41 LEN=40 TOS=0x00 PREC=0x00 TTL=243 ID=54321 PROTO=TCP SPT=10978 DPT=8443 WINDOW=65535 RES=0x00 SYN URGP=0
    Dec  2 09:26:47 webserver14 kernel: [30628.042582] Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=2a:50:e6:37:f5:ad:00:08:a4:56:50:00:08:00 SRC=124.142.250.81 DST=37.247.42.41 LEN=40 TOS=0x00 PREC=0x00 TTL=45 ID=3738 PROTO=TCP SPT=51112 DPT=23 WINDOW=65431 RES=0x00 SYN URGP=0
    Dec  2 09:26:56 webserver14 kernel: [30636.775555] Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=2a:50:e6:37:f5:ad:00:08:a4:56:50:00:08:00 SRC=190.128.123.27 DST=37.247.42.41 LEN=40 TOS=0x00 PREC=0x00 TTL=51 ID=21580 PROTO=TCP SPT=41938 DPT=2323 WINDOW=50110 RES=0x00 SYN URGP=0
    Dec  2 09:27:15 webserver14 kernel: [30655.477517] Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=2a:50:e6:37:f5:ad:00:08:a4:56:50:00:08:00 SRC=196.52.43.65 DST=37.247.42.41 LEN=40 TOS=0x00 PREC=0x00 TTL=246 ID=54321 PROTO=TCP SPT=10978 DPT=9000 WINDOW=65535 RES=0x00 SYN URGP=0
    Dec  2 09:27:18 webserver14 pure-ftpd: (?@37.247.42.6) [INFO] Logout.
    Dec  2 09:27:18 webserver14 pure-ftpd: (?@37.247.42.6) [INFO] Logout.
    Dec  2 09:27:18 webserver14 pure-ftpd: (?@37.247.42.6) [INFO] Logout.
    Dec  2 09:27:18 webserver14 pure-ftpd: (?@37.247.42.6) [INFO] Logout.
    Dec  2 09:27:22 webserver14 pure-ftpd: (?@91.196.94.214) [INFO] New connection from 91.196.94.214
    Dec  2 09:27:23 webserver14 lfd[1806]: SYSLOG check [J0JV10tNro6CoQSCHjrX2qWWN8wbp]
    Dec  2 09:27:23 webserver14 pure-ftpd: (?@37.247.42.6) [INFO] New connection from 37.247.42.6
    Dec  2 09:27:24 webserver14 pure-ftpd: (?@37.247.42.6) [INFO] Logout.
    Dec  2 09:44:02 webserver14 kernel: imklog 5.8.10, log source = /proc/kmsg started.
    Dec  2 09:44:02 webserver14 rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="2033" x-info="http://www.rsyslog.com"] start
    Dec  2 09:44:02 webserver14 kernel: [    0.000000] Initializing cgroup subsys cpuset
    Dec  2 09:44:02 webserver14 kernel: [    0.000000] Initializing cgroup subsys cpu
    Dec  2 09:44:02 webserver14 kernel: [    0.000000] Linux version 2.6.32-773.26.1.lve1.4.43.el6.x86_64 (mockbuild@build.cloudlinux.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC) ) #1  SMP Mon Oct 30 02:31:47 EDT 2017
    I hope someone can rescue me with a solution, so that from now on I can simply use the migration function.

    Right now we only migrate a VM when it is really necessary, and problems almost always follow afterwards.

  2. #2 — tvdh (Thread Starter)

    I still regularly come across this in my Proxmox node log:

    Code:
    Dec  2 09:43:41 node34 kernel: [14639926.997391] kvm [24172]: vcpu0 unhandled rdmsr: 0xce
    Dec  2 09:43:41 node34 kernel: [14639927.130637] kvm [24172]: vcpu0 unhandled rdmsr: 0x345
    Dec  2 09:43:41 node34 kernel: [14639927.130709] kvm_set_msr_common: 246 callbacks suppressed
    Dec  2 09:43:41 node34 kernel: [14639927.130711] kvm [24172]: vcpu0 unhandled wrmsr: 0x680 data 0
    Dec  2 09:43:41 node34 kernel: [14639927.130740] kvm [24172]: vcpu0 unhandled wrmsr: 0x6c0 data 0
    Dec  2 09:43:41 node34 kernel: [14639927.130768] kvm [24172]: vcpu0 unhandled wrmsr: 0x681 data 0
    Dec  2 09:43:41 node34 kernel: [14639927.130796] kvm [24172]: vcpu0 unhandled wrmsr: 0x6c1 data 0
    Dec  2 09:43:41 node34 kernel: [14639927.130824] kvm [24172]: vcpu0 unhandled wrmsr: 0x682 data 0
    Dec  2 09:43:41 node34 kernel: [14639927.130851] kvm [24172]: vcpu0 unhandled wrmsr: 0x6c2 data 0

    Code:
    Dec  2 10:16:51 node34 pvestatd[2243]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.30.10_id_rsa root@192.168.30.10 zfs get -o value -Hp available,used SSDpoolTwo' failed: exit code 255
    Code:
    Dec  2 09:35:51 node34 kernel: [14639456.835439] kvm: zapping shadow pages for mmio generation wraparound
    Dec  2 09:35:51 node34 kernel: [14639456.835888] kvm: zapping shadow pages for mmio generation wraparound
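    The pvestatd failure above is worth isolating: exit code 255 usually means the ssh connection itself failed, rather than the remote zfs command erroring. It can be reproduced by hand from the node with the exact command from the log:

    Code:
    /usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.30.10_id_rsa \
        root@192.168.30.10 zfs get -o value -Hp available,used SSDpoolTwo
    echo "exit: $?"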
    Last edited by tvdh; 02/12/17 at 11:30.

  3. #3 — wbontekoe (Wieger Bontekoe, Skynet ICT B.V.)

    How is everything connected? Sounds like a network problem to me.

  4. #4 — tvdh (Thread Starter)

    A 10 Gbit Ubiquiti switch, MTU 9000. I don't see anything really shocking about the switch in Observium at that moment.

    In the past I had 1 Gbit Cisco, and I already had the problem back then. (Back then we effectively solved it by simply not doing migrations, but sometimes it has to happen because of RAM allocation.)

  5. #5 — wbontekoe (Wieger Bontekoe)

    Quote Originally Posted by tvdh
    A 10 Gbit Ubiquiti switch, MTU 9000. I don't see anything really shocking about the switch in Observium at that moment. [...]
    And the NICs of your host? Could buffers be filling up somewhere? Are the readings of your SFPs good? It could be a bad SFP.

    ifconfig — look for eth errors or dropped packets.
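    Going one step further than ifconfig, the driver-level counters and SFP diagnostics can be read with ethtool (a sketch, assuming the interface is eth4 as in the output below and that the driver and module expose these):

    Code:
    # driver/ring-buffer counters; exact counter names vary per NIC driver
    ethtool -S eth4 | egrep -i 'err|drop|fifo|miss'
    # SFP DOM readings (rx/tx power, temperature), if the module supports them
    ethtool -m eth4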

  6. #6 — tvdh (Thread Starter)

    No errors here, at any rate:

    Code:
    eth4 Link encap:Ethernet HWaddr 00:1b:21:6e:08:28
    inet addr:192.168.30.234 Bcast:192.168.30.255 Mask:255.255.255.0
    inet6 addr: fe80::21b:21ff:fe6e:828/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
    RX packets:1903399203 errors:0 dropped:0 overruns:0 frame:0
    TX packets:1462398726 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:8443234840661 (7.6 TiB) TX bytes:7169123309510 (6.5 TiB)

  7. #7 — SF-Jeroen (Jeroen)

    Dec 2 10:16:51 node34 pvestatd[2243]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.30.10_id_rsa root@192.168.30.10 zfs get -o value -Hp available,used SSDpoolTwo' failed: exit code 255
    If you run "zfs get all SSDpoolTwo" by hand on the ZFS server, do you get normal output? If not, can you check whether any processes are hanging in D-state on the ZFS server?
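    On the OmniOS/illumos side both checks are quick (a sketch; device names and thresholds depend on the pool layout):

    Code:
    # per-device service times; one disk with a very high asvc_t can stall the whole pool
    iostat -xn 5
    # full process list with the state in the second column; look for anything stuck
    ps -efl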

  8. #8 — tvdh (Thread Starter)

    Quote Originally Posted by SF-Jeroen
    If you run "zfs get all SSDpoolTwo" by hand on the ZFS server, do you get normal output? [...]
    Yes, that works fine:

    Code:
     zfs get all SSDpoolTwo
    NAME        PROPERTY              VALUE                  SOURCE
    SSDpoolTwo  type                  filesystem             -
    SSDpoolTwo  creation              Fri Nov 18  5:01 2016  -
    SSDpoolTwo  used                  2.38T                  -
    SSDpoolTwo  available             396G                   -
    SSDpoolTwo  referenced            26.9K                  -
    SSDpoolTwo  compressratio         1.19x                  -
    SSDpoolTwo  mounted               yes                    -
    SSDpoolTwo  quota                 none                   default
    SSDpoolTwo  reservation           none                   default
    SSDpoolTwo  recordsize            32K                    local
    SSDpoolTwo  mountpoint            /SSDpoolTwo            default
    SSDpoolTwo  sharenfs              off                    default
    SSDpoolTwo  checksum              on                     default
    SSDpoolTwo  compression           lz4                    local
    SSDpoolTwo  atime                 on                     default
    SSDpoolTwo  devices               on                     default
    SSDpoolTwo  exec                  on                     default
    SSDpoolTwo  setuid                on                     default
    SSDpoolTwo  readonly              off                    default
    SSDpoolTwo  zoned                 off                    default
    SSDpoolTwo  snapdir               hidden                 default
    SSDpoolTwo  aclmode               discard                default
    SSDpoolTwo  aclinherit            restricted             default
    SSDpoolTwo  canmount              on                     default
    SSDpoolTwo  xattr                 on                     default
    SSDpoolTwo  copies                1                      default
    SSDpoolTwo  version               5                      -
    SSDpoolTwo  utf8only              off                    -
    SSDpoolTwo  normalization         none                   -
    SSDpoolTwo  casesensitivity       sensitive              -
    SSDpoolTwo  vscan                 off                    default
    SSDpoolTwo  nbmand                off                    default
    SSDpoolTwo  sharesmb              off                    default
    SSDpoolTwo  refquota              none                   default
    SSDpoolTwo  refreservation        256G                   local
    SSDpoolTwo  primarycache          all                    default
    SSDpoolTwo  secondarycache        all                    default
    SSDpoolTwo  usedbysnapshots       26.9K                  -
    SSDpoolTwo  usedbydataset         26.9K                  -
    SSDpoolTwo  usedbychildren        2.13T                  -
    SSDpoolTwo  usedbyrefreservation  256G                   -
    SSDpoolTwo  logbias               latency                default
    SSDpoolTwo  dedup                 off                    default
    SSDpoolTwo  mlslabel              none                   default
    SSDpoolTwo  sync                  standard               default
    SSDpoolTwo  refcompressratio      1.00x                  -
    SSDpoolTwo  written               13.5K                  -
    SSDpoolTwo  logicalused           2.34T                  -
    SSDpoolTwo  logicalreferenced     10.5K                  -
    SSDpoolTwo  filesystem_limit      none                   default
    SSDpoolTwo  snapshot_limit        none                   default
    SSDpoolTwo  filesystem_count      none                   default
    SSDpoolTwo  snapshot_count        none                   default
    SSDpoolTwo  redundant_metadata    all                    default

  9. #9 — tvdh (Thread Starter)

    This morning it unfortunately happened again: the VPS went down once more.
    Last night several VPSes unfortunately also went down briefly.

    This morning it was again that specific VPS, hung with the same messages on the screen.
    I then logged in over SSH on the various nodes.
    From there I went to the SAN via /mnt/pve/SSDtwo, and to other volumes on this SAN as well.
    This was extremely slow: when I ran ls it really took multiple seconds.
    After that I stopped the VPS, and communication with the SAN was super fast again.
    I started the VPS and it was as if nothing had happened; it runs perfectly again.
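    For the next occurrence, timing a trivial operation on the mount from each node could help localise where the stall sits (a sketch; nfsiostat comes with the Debian nfs-common tooling, assuming it is installed):

    Code:
    # crude latency probe against the NFS mount
    time ls /mnt/pve/SSDtwo
    # per-mount NFS op latencies and retransmits, sampled every 5 seconds
    nfsiostat 5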

    Anyone have any idea?

  10. #10 — SF-Jeroen (Jeroen)

    If shutting down one specific VM solves the problem, I would first look at what kind of I/O that VM is doing while ZFS is slow. You can do this on the client, but also on the ZFS fileserver (with DTrace). Two more things that may be worth checking (a quick sketch follows below):

    - Do you have enough NFS threads on the server?
    - Are your NFS mount options correct on the client (the hypervisor)?
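    Both are quick to verify, and the DTrace angle can be covered with the illumos NFS provider (a sketch; mount path as mentioned earlier in the thread):

    Code:
    # on the hypervisor: mount options as actually negotiated
    grep SSDtwo /proc/mounts
    # on the hypervisor: client-side RPC stats; climbing retransmissions point at the network
    nfsstat -rc
    # on the ZFS server: count NFSv3 reads per file to see which VM disk is hot
    dtrace -n 'nfsv3:::op-read-start { @[args[1]->noi_curpath] = count(); }'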



  11. #11 — The-BosS (Dennis de Houx, All In One, moderator)

    In addition to what @SF-Jeroen suggests, you could also set per-VM limits on read/write speed and IOPS; that way you prevent a single VM from pulling your whole platform down, which is what is happening now.
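    In Proxmox those limits live on the VM's disk definition; a minimal sketch (the VM ID, storage name, disk name and values here are placeholders):

    Code:
    # cap one VM at ~50 MB/s and 500 IOPS per direction
    qm set 101 --virtio0 SSDtwo:101/vm-101-disk-1.qcow2,mbps_rd=50,mbps_wr=50,iops_rd=500,iops_wr=500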

  12. #12 — tvdh (Thread Starter)

    Quote Originally Posted by SF-Jeroen
    If shutting down one specific VM solves the problem, I would first look at what kind of I/O that VM is doing while ZFS is slow. [...]
    The NFS mount options are the Proxmox defaults.

    The NFS server is configured the way napp-it prescribes. Current NFS properties (sharectl get nfs):

    Code:
    servers=16
    lockd_listen_backlog=32
    lockd_servers=20
    lockd_retransmit_timeout=5
    grace_period=90
    server_versmin=2
    server_versmax=4
    client_versmin=2
    client_versmax=4
    server_delegation=on
    nfsmapid_domain=
    max_connections=-1
    protocol=ALL
    listen_backlog=32
    device=
    mountd_listen_backlog=64
    mountd_max_threads=16


    There is virtually no iowait (according to the Proxmox web interface) at the moment that VM hangs.
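    Should the NFS thread count ever become the bottleneck, it can be raised on OmniOS with sharectl (a sketch; 64 is just an example value):

    Code:
    # raise the NFS server thread pool from the napp-it default of 16
    sharectl set -p servers=64 nfs
    sharectl get -p servers nfs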

  13. #13 — tvdh (Thread Starter)

    Quote Originally Posted by The-BosS
    ...you could also set per-VM limits on read/write speed and IOPS; that way you prevent a single VM from pulling your whole platform down [...]
    We have done that too.
    It works quite well, because in the past it did indeed happen now and then that one VM made everything very slow.
    So that is not where the problem comes from.

  14. #14 — virtuality (Merijn, Virtuality)

    Do I understand correctly from your story that last night there was no migration involved, but VPSes spontaneously became slow and most of them also spontaneously started responding fine again?

  15. #15 — tvdh (Thread Starter)

    Quote Originally Posted by virtuality
    Do I understand correctly that last night there was no migration involved, but VPSes spontaneously became slow and most of them also spontaneously recovered? [...]
    Correct, but my experience is that this happens because I migrated 1 VPS a few days ago.
    I have had this in the past as well.
    If I put the VPS back on its original node, everything will probably become completely stable again; that is how it has always gone in the past.
    For now I am leaving it on the new node, because I actually want to solve this problem for good...
