===== NFS LXC =====

https://unix.stackexchange.com/questions/450308/how-to-allow-specific-proxmox-lxc-containers-to-mount-nfs-shares-on-the-network
Yes, it's possible. Simply create a new profile (based on lxc-container-default-cgns) and use it for the specific containers. So first run
  
  cp /etc/apparmor.d/lxc/lxc-default-cgns /etc/apparmor.d/lxc/lxc-default-with-nfs
Then edit the new file /etc/apparmor.d/lxc/lxc-default-with-nfs:
  
put the NFS configuration (see below) just before the closing bracket (})

NFS configuration

Either write
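(the mount rules themselves fall outside this diff; the following is reconstructed from the linked stackexchange answer)

  mount fstype=nfs*,
  mount fstype=rpc_pipefs,

or, listing each filesystem type explicitly:

  mount fstype=nfs,
  mount fstype=nfs4,
  mount fstype=nfsd,
  mount fstype=rpc_pipefs,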
  
Use the new profile
Edit /etc/pve/lxc/${container_id}.conf and append this line:

  lxc.apparmor.profile: lxc-container-default-with-nfs

Then stop the container and start it again, e.g. like this:

  pct stop ${container_id} && pct start ${container_id}
Now mounting NFS shares should work.
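A quick way to verify from inside the container (the server address and export path below are placeholders):

  mount -t nfs 192.168.1.10:/export/backup /mnt
  df -h /mnt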
  
Fixing "cluster not ready - no quorum (500)" by resetting a node's cluster state, per https://www.reddit.com/r/Proxmox/comments/avk2gx/help_cluster_not_ready_no_quorum_500/:
  
  systemctl stop pve-cluster
  systemctl stop corosync
  pmxcfs -l
  rm /etc/pve/corosync.conf
  rm /etc/corosync/*
  killall pmxcfs
  systemctl start pve-cluster
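To inspect the result, pvecm (the Proxmox cluster CLI) can be used; on a node separated from its cluster this way, expected votes can also be lowered so the single node becomes quorate again (use with care):

  pvecm status      # show quorum and membership state
  pvecm expected 1  # optional: let this single node reach quorum on its own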
  
====== move vm to another node ======
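The details of this section are unchanged in this diff and therefore not shown here. One common manual approach (a sketch only, assuming the VM is stopped and its disks live on storage visible to both nodes; node names and the VM ID are placeholders) is to move the VM's config file between node directories in /etc/pve:

  mv /etc/pve/nodes/<source_node>/qemu-server/<vmid>.conf /etc/pve/nodes/<target_node>/qemu-server/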
  
====== run unms in lxc ======
Add the following line to /usr/lib/lxc/ID.conf:

  lxc.apparmor.profile = unconfined

then restart the container:

  lxc-stop -n ID
  lxc-start -n ID

By default the LXC container is strictly unprivileged.

Once UNMS has started, docker ps should show all of its containers up:

  Waiting for UNMS to start
  CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                            NAMES
  51d5a2823aef        ubnt/unms:1.1.2           "/usr/bin/dumb-init …"   9 seconds ago       Up 6 seconds                                                         unms
  320bb0c8b23e        ubnt/unms-crm:3.1.2       "make server_with_mi…"   10 seconds ago      Up 8 seconds        80-81/tcp, 443/tcp, 9000/tcp, 2055/udp           ucrm
  c8ced9596c84        ubnt/unms-netflow:1.1.2   "/usr/bin/dumb-init …"   11 seconds ago      Up 7 seconds        0.0.0.0:2055->2055/udp                           unms-netflow
  27b9c3344742        redis:5.0.5-alpine        "docker-entrypoint.s…"   15 seconds ago      Up 11 seconds                                                        unms-redis
  1f1fd4ad8b11        ubnt/unms-nginx:1.1.2     "/entrypoint.sh ngin…"   15 seconds ago      Up 10 seconds       0.0.0.0:80-81->80-81/tcp, 0.0.0.0:443->443/tcp   unms-nginx
  dcbc960d019f        postgres:9.6.12-alpine    "docker-entrypoint.s…"   15 seconds ago      Up 12 seconds                                                        unms-postgres
  1ac50d102245        rabbitmq:3.7.14-alpine    "docker-entrypoint.s…"   15 seconds ago      Up 13 seconds                                                        unms-rabbitmq
  a34be8e22abe        ubnt/unms-fluentd:1.1.2   "/entrypoint.sh /bin…"   17 seconds ago      Up 15 seconds       5140/tcp, 127.0.0.1:24224->24224/tcp             unms-fluentd
  UNMS is running

====== Proxmox cluster ======
==== Extending the timeout to 10 seconds ====

Following https://www.thegeekdiary.com/how-to-change-pacemaker-cluster-heartbeat-timeout-in-centos-rhel-7/ I put "token: 9500" into /etc/pve/corosync.conf, which should extend the quorum timeout to roughly 10 seconds.
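The setting belongs in the totem section; a sketch of the relevant part (cluster name and config_version are placeholders, and config_version has to be incremented on every edit of /etc/pve/corosync.conf; token is in milliseconds):

  totem {
    cluster_name: mycluster
    config_version: 4
    version: 2
    token: 9500
  }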

=== proxmox 7 centos 7 ===
With old systemd there is no console and no network in CentOS containers (and other older distributions) on Proxmox 7. The solution is to install a newer version of systemd.

Workaround proposed on https://forum.proxmox.com/threads/pve-7-wont-start-centos-7-container.97834/post-425419 (a consolidated shell sketch follows the list):
  - gain access to the CT with pct enter <CTID>
  - enable the network with ifup eth0
  - download this yum repository: https://copr.fedorainfracloud.org/coprs/jsynacek/systemd-backports-for-centos-7/
  - curl -o /etc/yum.repos.d/jsynacek-systemd-backports-for-centos-7-epel-7.repo https://copr.fedorainfracloud.org/coprs/jsynacek/systemd-backports-for-centos-7/repo/epel-7/jsynacek-systemd-backports-for-centos-7-epel-7.repo
  - issue yum update
  - exit from the CT
  - stop the CT with pct stop <CTID>
  - start the CT normally
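The same steps as a shell sketch (<CTID> stands for the container ID):

  # on the Proxmox host: enter the container
  pct enter <CTID>
  
  # inside the CT: bring up the network and add the systemd backport repo
  ifup eth0
  curl -o /etc/yum.repos.d/jsynacek-systemd-backports-for-centos-7-epel-7.repo https://copr.fedorainfracloud.org/coprs/jsynacek/systemd-backports-for-centos-7/repo/epel-7/jsynacek-systemd-backports-for-centos-7-epel-7.repo
  yum update
  exit
  
  # back on the host: restart the CT
  pct stop <CTID>
  pct start <CTID>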

=== proxmox two node cluster ===
add to file /etc/pve/corosync.conf:
  quorum {
      provider: corosync_votequorum
      two_node: 1
      wait_for_all: 0
  }
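two_node: 1 makes votequorum treat the two-node cluster as quorate even when only one node is up, and wait_for_all: 0 additionally lets a node that boots alone become quorate without first seeing its peer. To apply the change, config_version in the totem section must be bumped; restarting corosync and checking quorum afterwards is a reasonable sanity check:

  systemctl restart corosync
  pvecm status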
  