Jenkins / JENKINS-44042

JNLP slave ConnectionRefusalException: None of the protocols were accepted

    Details


      Description

      Hi, I'm running Kubernetes locally on my Mac via minikube to test out the Kubernetes plugin. I seem to be running into an issue when the jnlp container attempts to connect back to the Jenkins master: it seems to discover it fine, but then it fails with "None of the protocols were accepted". I also tested connectivity on the JNLP port (50000), which is open, so I'm somewhat at a loss as to what the issue could be.
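      The port connectivity test mentioned above can be sketched as a small shell check. The default host IP here is an assumption (a typical minikube host address); substitute the address your pods actually use to reach the master:

```shell
# Sketch of the port check described above; run it from wherever the jnlp
# container runs (e.g. via `minikube ssh` or a debug pod). The default host
# IP below is hypothetical -- substitute your own master address.
check_port() {
  # nc -z: probe without sending data; -w 5: give up after 5 seconds
  nc -z -w 5 "$1" "$2"
}

JENKINS_HOST="${JENKINS_HOST:-192.168.99.1}"  # assumption: minikube host IP
JNLP_PORT="${JNLP_PORT:-50000}"

if check_port "$JENKINS_HOST" "$JNLP_PORT"; then
  echo "JNLP port $JNLP_PORT on $JENKINS_HOST is open"
else
  echo "JNLP port $JNLP_PORT on $JENKINS_HOST is closed or filtered"
fi
```

      Note that an open port only rules out network-level problems; the "Unknown client name" refusal in the log below happens after the TCP connection has already been accepted.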

      I've attached the logs and dashboard/env var information for the jnlp container, as well as the Jenkins logs.

      I'm pretty new to Kubernetes and the remoting part of Jenkins, but these lines stick out to me:

       

      INFO: Accepted connection #15 from /172.17.0.1:39604
      May 04, 2017 1:25:31 AM org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer onRecv
      INFO: [JNLP4-connect connection from /172.17.0.1:39604] Refusing headers from remote: Unknown client name: kubernetes-8bc92b750f5b4427a1d25963f95474e4-6673e16106d

       

      Thanks for any insight you can provide. If there's anything else I can provide just let me know.

        Attachments

        1. container-logs.txt
          8 kB
        2. dashboard.txt
          0.6 kB
        3. jenkins.log
          7 kB
        4. kubernetes.log
          9 kB

            Activity

            csanchez Carlos Sanchez added a comment -

            Jae Gangemi you are probably experiencing JENKINS-45910

            larslawoko Lars Lawoko added a comment -

            Just confirming that we saw this behaviour too; Carlos Sanchez's suggestion that another failing container triggers this was correct.

            patrickyyao Yuan Yao added a comment - edited

            We are having a very similar problem too. What we found out is that when there are multiple builds in the queue with different labels, the plugin will try to provision several slaves at the same time (due to the non-blocking nature of the provision method in the KubernetesCloud class).

            For example, if I have 5 builds waiting in the queue, it will try to provision 5 slaves simultaneously, even though my container cap is set to 3. Exceeding the cap is a minor problem, though. The real problem is that these 5 slaves will soon be dead because of this "None of the protocols were accepted" error.

            I ended up adding a sleep in the provision method (https://github.com/faraway/kubernetes-plugin/commit/364abb99ccb6defc539b90882d1854305e39d01a) and running a custom build, so that it only provisions one slave at a time. Things have been working great for us for the past few weeks.

            Despite this lame fix, my question is whether this is something I did wrong (maybe I should use the same label for all the repos and all the builds?). If this is supposed to work, then I'm interested in fixing it; any directions or instructions from the maintainers would really help me.

            My apologies if it seems I'm hijacking this ticket; it's just that the errors look exactly the same. Please advise if I should create a new ticket instead.

            csanchez Carlos Sanchez added a comment -

            You should open a new ticket if you see that concurrent provisioning is failing, but you will need to provide the logs described at https://github.com/jenkinsci/kubernetes-plugin#debugging so the issue can be diagnosed.

            tenstriker Nirav Patel added a comment - edited

            FYI, I saw this issue when my Jenkins host's (my Mac's) IP changed. I updated the Kubernetes cloud config's Jenkins host and tunnel host IPs, and that fixed it for me. It also happens if the IPs point to some other Jenkins host from which you didn't submit the original job.
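            A quick way to check for the kind of mismatch described here is to print the connection-related environment variables inside the jnlp container. JENKINS_URL and JENKINS_TUNNEL are the variables the kubernetes-plugin injects into the jnlp container (also visible in the attached dashboard.txt); this is just a sketch:

```shell
# Sketch: run inside the jnlp container (e.g. via `kubectl exec`) to confirm
# which master the agent will try to reach. If these point at a stale IP or
# at a different Jenkins host, the master refuses the agent's handshake.
show_agent_target() {
  echo "JENKINS_URL=${JENKINS_URL:-<unset>}"
  echo "JENKINS_TUNNEL=${JENKINS_TUNNEL:-<unset>}"
}
show_agent_target
```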


              People

              • Assignee:
                csanchez Carlos Sanchez
                Reporter:
                sgarlick987 Stephen Garlick
              • Votes:
                2
                Watchers:
                12
