
Docker published ports and the iptables problem


Docker's publish option (-p) exposes a container port on the host:

docker run -p 8080:80 xxxxxxx

Other machines can then reach the container's port 80 through the host's IP on port 8080. In other words, -p gives Docker a port-forwarding capability.
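
A quick sanity check from another machine (the host IP 10.172.15.195 is the one that appears later in this article, and the container is assumed to serve HTTP):

# run on another machine on the same network
curl -v http://10.172.15.195:8080/
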
Now inspect the iptables filter table:

iptables -L -n

Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.2           tcp dpt:15672
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.2           tcp dpt:5672
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.4           tcp dpt:3306
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.5           tcp dpt:6000
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.3           tcp dpt:8848

## the whitelisted ports opened on the host
Chain IN_public_allow (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:22 ctstate NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:9966 ctstate NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:12306 ctstate NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:443 ctstate NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 ctstate NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:9977 ctstate NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:5100 ctstate NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:8089 ctstate NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:8000 ctstate NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:8010 ctstate NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:8090 ctstate NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:53306 ctstate NEW

Chain IN_public_deny (1 references)
target     prot opt source               destination

Chain IN_public_log (1 references)
target     prot opt source               destination

Chain OUTPUT_direct (1 references)
target     prot opt source               destination

Check Docker's NAT forwarding rules:

iptables -t nat  -nvL DOCKER
Chain DOCKER (2 references)
 pkts bytes target     prot opt in       out     source               destination
   20  1200 RETURN     all  --  docker0  *       0.0.0.0/0            0.0.0.0/0
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:15672 to:172.17.0.2:15672
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:5672 to:172.17.0.2:5672
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:53306 to:172.17.0.4:3306
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:6000 to:172.17.0.5:6000
   10   520 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8848 to:172.17.0.3:8848

Notice that a packet is DNAT-forwarded only when its input interface matches !docker0, i.e. only traffic arriving from outside the docker0 bridge is port-forwarded; traffic arriving on docker0 hits the RETURN rule first and skips the DNAT.
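
For reference, the DNAT rule Docker installs for -p 8080:80 is roughly equivalent to the following sketch (the container IP 172.17.0.2 is assumed):

# approximate equivalent of what Docker adds for -p 8080:80
iptables -t nat -A DOCKER ! -i docker0 -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80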

By default, Docker containers reach one another through the docker0 bridge.
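To see which containers sit on the docker0 bridge and which IPs they were given, you can inspect the default bridge network; this prints each attached container's name and IPv4 address as JSON:

docker network inspect bridge -f '{{json .Containers}}'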

Consider a concrete scenario:

There is a nacos service and a mysql service, each running in its own container.

The mysql container is started roughly like this:

docker run -d \
  --privileged=true \
  --restart=always \
  -e MYSQL_ROOT_PASSWORD=root \
  -p ${port}:3306 \
  -v /home/mysql/data:/var/lib/mysql \
  -v ${PWD}/conf:/etc/mysql/conf.d \
  -v ${PWD}/log:/var/log/mysql \
  -v /etc/localtime:/etc/localtime \
  --name ${name} smc-mysql:5.7
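
Once the container is up, the actual host-to-container port mapping can be double-checked with docker port (using the name passed via --name):

docker port ${name}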

Looking up the container IPs by their published ports in the DOCKER chain: 172.17.0.5 is mysql's IP, and nacos's IP is 172.17.0.2.

Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.2           tcp dpt:8848
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.3           tcp dpt:6000
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.4           tcp dpt:15672
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.4           tcp dpt:5672
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.5           tcp dpt:3306
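
The same mapping can be confirmed straight from Docker; the container names below are hypothetical and should match whatever was passed to --name:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mysql
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nacos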

The host's IP is 10.172.15.195:

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp61s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq portid 00bed50537e0 state UP qlen 1000
    link/ether 00:be:d5:05:37:e0 brd ff:ff:ff:ff:ff:ff
    inet 10.172.15.195/24 brd 10.172.15.255 scope global enp61s0f0
       valid_lft forever preferred_lft forever
3: enp61s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq portid 00bed50537e1 state DOWN qlen 1000
    link/ether 00:be:d5:05:37:e1 brd ff:ff:ff:ff:ff:ff
4: enp61s0f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq portid 00bed50537e2 state DOWN qlen 1000
    link/ether 00:be:d5:05:37:e2 brd ff:ff:ff:ff:ff:ff
5: enp61s0f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq portid 00bed50537e3 state DOWN qlen 1000
    link/ether 00:be:d5:05:37:e3 brd ff:ff:ff:ff:ff:ff
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:57:c3:24:2a brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
234: vethc14661e@if233: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 1a:d1:b0:cd:f8:07 brd ff:ff:ff:ff:ff:ff link-netnsid 0
236: veth01be069@if235: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether b2:25:cb:98:56:bc brd ff:ff:ff:ff:ff:ff link-netnsid 1
238: veth5326f54@if237: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 0e:db:a9:97:94:7c brd ff:ff:ff:ff:ff:ff link-netnsid 2
240: veth2fd6c88@if239: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 66:11:0a:f9:34:09 brd ff:ff:ff:ff:ff:ff link-netnsid 3

At this point the host can ping both the nacos IP and the mysql IP:

[root@localhost mysql]# ping 172.17.0.5
PING 172.17.0.5 (172.17.0.5) 56(84) bytes of data.
64 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.144 ms
64 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.050 ms
^C
--- 172.17.0.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.050/0.097/0.144/0.047 ms
[root@localhost mysql]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.091 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.047 ms
^C
--- 172.17.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.047/0.069/0.091/0.022 ms

However, the nacos container itself fails to reach mysql (via the host IP and the published port 53306), with an error like the following:

org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'configOpsController' defined in URL [jar:file:/home/nacos/target/nacos-server.jar!/BOOT-INF/lib/nacos-config-1.3.0.jar!/com/alibaba/nacos/config/server/controller/ConfigOpsController.class]: Unsatisfied dependency expressed through constructor parameter 1; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'externalDumpService': Invocation of init method failed; nested exception is ErrCode:500, ErrMsg:Nacos Server did not start because dumpservice bean construction failure :
No DataSource set
	at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:769)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:136)
	... 40 common frames omitted
Caused by: java.lang.IllegalStateException: No DataSource set
	at org.springframework.util.Assert.state(Assert.java:73)
	at org.springframework.jdbc.support.JdbcAccessor.obtainDataSource(JdbcAccessor.java:77)
	at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:371)
	at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:452)
	at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:462)
	at org.springframework.jdbc.core.JdbcTemplate.queryForObject(JdbcTemplate.java:473)
	at org.springframework.jdbc.core.JdbcTemplate.queryForObject(JdbcTemplate.java:480)
	at com.alibaba.nacos.config.server.service.repository.ExternalStoragePersistServiceImpl.findConfigMaxId(ExternalStoragePersistServiceImpl.java:702)
	at com.alibaba.nacos.config.server.service.dump.DumpAllProcessor.process(DumpTask.java:198)
	at com.alibaba.nacos.config.server.service.dump.DumpService.dumpConfigInfo(DumpService.java:249)
	at com.alibaba.nacos.config.server.service.dump.DumpService.dumpOperate(DumpService.java:155)
	... 48 common frames omitted
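
A quick way to confirm this is a connectivity problem rather than a credentials problem is to probe the published port from inside the nacos container. This sketch assumes the container is named nacos and that its image ships bash (the /dev/tcp redirection is a bash feature):

# attempt a TCP connect to host-IP:published-port from inside the container
docker exec nacos bash -c 'timeout 3 bash -c "</dev/tcp/10.172.15.195/53306" && echo reachable || echo blocked'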

Next, open the published mysql port in the host firewall:

[root@localhost logs]# firewall-cmd --permanent --zone=public --add-port=53306/tcp
success
[root@localhost logs]# firewall-cmd --reload
success
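
To double-check, list the whitelisted ports; 53306/tcp should now appear among them:

firewall-cmd --zone=public --list-ports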

Then re-run the nacos container, and this time it reaches mysql:

2020-10-13 12:49:57,919 INFO Initializing Spring embedded WebApplicationContext
2020-10-13 12:49:57,919 INFO Root WebApplicationContext: initialization completed in 5704 ms
2020-10-13 12:49:58,507 INFO Use Mysql as the driver
2020-10-13 12:49:58,632 INFO HikariPool-1 - Starting...
2020-10-13 12:49:58,968 INFO HikariPool-1 - Start completed.
2020-10-13 12:49:59,689 INFO Nacos-related cluster resource initialization
2020-10-13 12:49:59,701 INFO The cluster resource is initialized
2020-10-13 12:50:00,332 INFO Reflections took 124 ms to scan 1 urls, producing 6 keys and 24 values
2020-10-13 12:50:00,394 INFO Reflections took 3 ms to scan 1 urls, producing 2 keys and 12 values
2020-10-13 12:50:00,408 INFO Reflections took 5 ms to scan 1 urls, producing 3 keys and 15 values
2020-10-13 12:50:01,704 INFO Initializing ExecutorService 'applicationTaskExecutor'
2020-10-13 12:50:01,970 INFO Adding welcome page: class path resource [static/index.html]
2020-10-13 12:50:02,767 INFO Creating filter chain: Ant [pattern='/**'], []
2020-10-13 12:50:02,870 INFO Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@362a019c, org.springframework.security.web.context.SecurityContextPersistenceFilter@3af17be2, org.springframework.security.web.header.HeaderWriterFilter@65e61854, org.springframework.security.web.csrf.CsrfFilter@27a5328c, org.springframework.security.web.authentication.logout.LogoutFilter@303e3593, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@37f21974, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@224b4d61, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@1d9bec4d, org.springframework.security.web.session.SessionManagementFilter@4fcee388, org.springframework.security.web.access.ExceptionTranslationFilter@6c345c5f]
2020-10-13 12:50:03,153 INFO Exposing 2 endpoint(s) beneath base path '/actuator'
2020-10-13 12:50:03,192 INFO Initializing ExecutorService 'taskScheduler'
2020-10-13 12:50:03,650 INFO Tomcat started on port(s): 8848 (http) with context path '/nacos'
2020-10-13 12:50:03,657 INFO Started Nacos in 13.31 seconds (JVM running for 14.129)
2020-10-13 12:50:03,657 INFO Nacos Log files: /home/nacos/logs
2020-10-13 12:50:03,660 INFO Nacos Log files: /home/nacos/conf
2020-10-13 12:50:03,660 INFO Nacos Log files: /home/nacos/data
2020-10-13 12:50:03,660 INFO Nacos started successfully in stand alone mode. use external storage
2020-10-13 12:50:04,567 INFO Initializing Spring DispatcherServlet 'dispatcherServlet'
2020-10-13 12:50:04,567 INFO Initializing Servlet 'dispatcherServlet'
2020-10-13 12:50:04,593 INFO Completed initialization in 26 ms

The root cause: Docker's NAT rules only forward traffic that originates outside the docker network (input interface !docker0). Traffic originating inside the docker network is not forwarded, so a container connecting to a published port on the host falls through to the host firewall and needs a whitelist entry there.
Personally, I think this comes down to a questionable design in Docker's iptables handling.
Docker does, however, provide a way to turn its iptables management off.

Workaround: disable Docker's iptables management

Add "iptables": false to /etc/docker/daemon.json to stop Docker from writing its own iptables rules:

"iptables":false

With iptables management disabled, any port published by Docker must be explicitly whitelisted in iptables before it is reachable from outside, which matches the usual expectations for a host firewall.
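
For example, to expose the port published with docker run -p 8080:80 from the beginning of this article, you would whitelist it explicitly:

firewall-cmd --permanent --zone=public --add-port=8080/tcp
firewall-cmd --reload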
