Istio: allow incoming traffic to a service only from a particular namespace

We want Istio to allow incoming traffic to a service only from a particular namespace. How can we do this with Istio? We are running Istio version 1.1.3.

I'm not sure if this is possible for a particular namespace as such, but it will work with labels.
You can create a network policy; this is nicely explained in Traffic Routing in Kubernetes via Istio and Envoy Proxy.
- from:
  - podSelector:
      matchLabels:
        zone: trusted
In the example, only pods with the label zone: trusted will be allowed to make incoming connections to the pod.
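The snippet above selects by pod labels; to restrict by namespace instead, a NetworkPolicy can use a namespaceSelector. A minimal sketch, where all names and labels are illustrative assumptions:

```yaml
# Allow ingress to the protected pods only from namespaces labelled
# team=trusted. All names and labels below are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-trusted-namespace
  namespace: my-service-ns        # namespace of the protected service
spec:
  podSelector:
    matchLabels:
      app: my-service             # pods this policy applies to
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: trusted           # only namespaces carrying this label
```

Note that NetworkPolicy enforcement depends on the cluster's network plugin supporting it.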
You can read about Using Network Policy with Istio.
I would also recommend reading Security Concepts in Istio as well as Denials and White/Black Listing.
Hope this helps you.


Application unable to communicate with port listening on same container when Istio is installed

I have a container running in a pod that runs several JARs on different ports. Specifically, it is running a Java application and an Artemis server.
The application talks to the Artemis server via RPC.
All works fine until I install Istio and inject a sidecar. Just wondering if anyone has any ideas how Istio could be affecting communication within the container/pod.
If you have injected the sidecar into your Pod (or a whole namespace), the Istio Envoy proxy takes over traffic management, intercepting incoming and outgoing calls for the affected Kubernetes services. Communication between microservices therefore becomes the responsibility of the Istio control plane, as described in the Istio Architecture documentation.
Istio provides its own resources for traffic management purposes; these have to be used in order to establish connections to the targeted microservices:
There are four traffic management configuration resources in Istio:
VirtualService, DestinationRule, ServiceEntry, and Gateway:
A VirtualService defines the rules that control how requests for a service are routed within an Istio service mesh.
A DestinationRule configures the set of policies to be applied to a request after VirtualService routing has occurred.
A ServiceEntry is commonly used to enable requests to services outside of an Istio service mesh.
A Gateway configures a load balancer for HTTP/TCP traffic, most commonly operating at the edge of the mesh to enable ingress traffic for an application.
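As a minimal sketch of how the first two resources fit together, routing all traffic for a service to one subset (host, subset, and label names here are illustrative placeholders, not from the question):

```yaml
# VirtualService: route all HTTP requests for my-service to subset v1.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1
---
# DestinationRule: define subset v1 as the pods labelled version=v1.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service
spec:
  host: my-service
  subsets:
  - name: v1
    labels:
      version: v1
```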
I encourage you to learn more about Istio mesh features with some good Examples as well.

Endpoint Paths for APIs inside Docker and Kubernetes

I am a newbie with Docker and Kubernetes, and I am developing RESTful APIs which will later be deployed to Docker containers in a Kubernetes cluster.
How will the path of the endpoints change? I have heard that Docker Swarm and Kubernetes add some words to the endpoints.
The "path" part of the endpoint URLs themselves (for this SO question, the /questions/53008947/... part) won't change. But the rest of the URL might.
Docker publishes services at a TCP-port level (docker run -p option, Docker Compose ports: section) and doesn't look at what traffic is going over a port. If you have something like an Apache or nginx proxy as part of your stack that might change the HTTP-level path mappings, but you'd probably be aware of that in your environment.
Kubernetes works similarly, but there are more layers. A container runs in a Pod, and can publish some port out of the Pod. That's not used directly; instead, a Service refers to the Pod (by its labels) and republishes its ports, possibly on different port numbers. The Service has a DNS name service-name.namespace.svc.cluster.local that can be used within the cluster; you can also configure the Service to be reachable on a fixed TCP port on every node in the cluster (NodePort) or, if your Kubernetes is running on a public-cloud provider, to create a load balancer there (LoadBalancer). Again, all of this is strictly at the TCP level and doesn't affect HTTP paths.
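The Pod-to-Service port republishing described above might look like this sketch (names, labels, and ports are placeholders):

```yaml
# Service republishing a Pod's container port 8080 as port 80
# inside the cluster. All names and labels are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: default
spec:
  selector:
    app: my-api          # must match the Pod's labels
  ports:
  - port: 80             # port clients inside the cluster connect to
    targetPort: 8080     # port the container actually listens on
  # type: NodePort       # uncomment to also expose a fixed port on every node
```

With this in place, the service is reachable in-cluster at http://my-api.default.svc.cluster.local/, and the HTTP path part of any request is passed through unchanged.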
There is one other Kubernetes piece, an Ingress controller, which acts as a declarative wrapper around the nginx proxy (or something else with similar functionality). That does operate at the HTTP level and could change paths.
The other corollary to this is that the URL to reach a service might be different in different environments: http://localhost:12345/path in a local development setup, http://other_service:8080/path in Docker Compose, http://other-service/path in Kubernetes, and something different again in production. You need some way to make that configurable (often an environment variable).
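Making the base URL configurable can be as simple as reading an environment variable with a development default; a minimal Python sketch (the variable name BASE_URL and the default are assumptions, not from the question):

```python
import os

# Read the base URL from the environment so the same code works in local
# development, Docker Compose, and Kubernetes; fall back to a local default.
BASE_URL = os.environ.get("BASE_URL", "http://localhost:12345")

def endpoint(path: str) -> str:
    """Join the configurable base URL with a fixed endpoint path."""
    return BASE_URL.rstrip("/") + path

print(endpoint("/questions/53008947/answer"))
```

In Kubernetes the variable would typically be set in the Deployment's env: section; in Docker Compose, under environment:.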

Exposing Istio Ingress Gateway as NodePort to GKE and run health check

I'm running the Istio ingress gateway in a GKE cluster. The Service runs as a NodePort. I'd like to connect it to a Google backend service. However, we need a health check that runs against Istio. Do you know if Istio exposes any HTTP endpoint to run a health check and verify its status?
Per this installation guide, "Istio requires no changes to the application itself. Note that the application must use HTTP/1.1 or HTTP/2.0 protocol for all its HTTP traffic because the Envoy proxy doesn't support HTTP/1.0: it relies on headers that aren't present in HTTP/1.0 for routing."
The healthcheck doesn't necessarily run against Istio itself, but against the whole stack behind the IP addresses you configured for the load balancer backend service. It simply requires a 200 response on / when invoked with no host name.
You can configure this by installing a small service like httpbin as the default path for your gateway.
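One way to get a 200 on / is to route the bare root path on the gateway to such a service; a sketch, where the gateway name, namespace, and port are illustrative assumptions:

```yaml
# Route "/" on the ingress gateway to httpbin so the GCP health check
# receives a 200. Gateway name, host, and port are placeholders.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: healthcheck-default
spec:
  hosts:
  - "*"                     # the health check sends no Host header
  gateways:
  - my-gateway              # assumed name of your Gateway resource
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: httpbin.default.svc.cluster.local
        port:
          number: 8000
```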
You might also consider changing your Service to a LoadBalancer type, annotated to be internal to your network (no public IP). This will generate a Backend Service, complete with healthcheck, which you can borrow for your other load balancer. This method has worked for me with nesting load balancers (to migrate load) but not for a proxy like Google's IAP.

rabbitmq openshift cluster

I set up a RabbitMQ cluster on OpenShift successfully.
However, I can't find a way to expose the amqp (5672) or amqps (5671) ports with OpenShift routes.
I saw in the OpenShift documentation that this is not supported:
Routers support the following protocols:
HTTPS (with SNI)
TLS with SNI
WebSocket traffic uses the same route conventions and supports the same TLS termination types as other traffic.
What is the best way of doing this?
Please find my setup below.
oc version
oc v1.4.1
kubernetes v1.4.0+776c994
features: Basic-Auth GSSAPI Kerberos SPNEGO
openshift v1.4.1
kubernetes v1.4.0+776c994
POD router used : openshift/origin-haproxy-router:v1.4.1
You have a number of options. See:
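Since the HAProxy router only handles HTTP and TLS-with-SNI, one common option is to bypass routes entirely and expose the AMQP port with a NodePort Service; a sketch, where the names and the nodePort value are illustrative assumptions:

```yaml
# Expose RabbitMQ's plain AMQP port on a fixed port of every node.
# Service name, selector, and nodePort are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-amqp
spec:
  type: NodePort
  selector:
    app: rabbitmq          # must match the RabbitMQ pod labels
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
    nodePort: 30672        # must be in the cluster's NodePort range
```

Alternatively, amqps on 5671 can go through a route with TLS passthrough, provided the clients support SNI.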

Migrating on-premise applications/services to CloudHub

We have applications/services running on Mule on-premise; now we want to migrate all of them to CloudHub. Are there any specific steps, considerations, or limitations that need to be followed for this migration to succeed?
We want to keep the services as-is on the cloud, as they are running on-premise.
Any help would be really appreciated.
One of the most important parts is that each app on CloudHub is deployed separately.
This means that each app will be 'isolated' in its own container and needs at least 0.1 vCore.
So if you have 20 apps, make sure you have a minimum of 2 vCores available.
The advantage is here, that different apps can run on different runtimes.
This URL will point you in some directions in terms of HTTP connectivity:
This part is important for port routing:
Important: On the Mule worker, the CloudHub load balancer proxies port :80 to :8081 for HTTP and proxies port :443 to :8082 for HTTPS. The http.port value must be set to port 8081 for HTTP, and the https.port value must be set to port 8082 for HTTPS. No other port numbers are supported.
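In practice this means the app's HTTP listener should reference the http.port property rather than a hard-coded port; a Mule 4-style sketch (the config name is an illustrative assumption):

```xml
<!-- HTTP listener bound to ${http.port} so CloudHub's load balancer
     can proxy port 80 to 8081. The config name is a placeholder. -->
<http:listener-config name="HTTP_Listener_config">
    <http:listener-connection host="0.0.0.0" port="${http.port}" />
</http:listener-config>
```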
Of course you need to think about a lot more, e.g.:
Are you using file inbound/outbound endpoints that write to a local file system?
This is not possible on CloudHub because you won't have your own file system; change to a cloud storage solution or SFTP/FTP.
Are you connecting to on-premise systems (probably yes)?
Figure out connectivity issues, firewall, VPC, etc.
VPC info: