Seeking simple example: scrape only logs in all running containers in any of 2 directories. See how ChatGPT answered. #984
Comments
Hey, this config is unfortunately far from being correct 😅
You can include those missing files in the OTel Collector with a `filelog` receiver:
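A minimal sketch of what that could look like in the chart's values.yaml, assuming the chart's `logsCollection.extraFileLogs` option; the receiver name and glob pattern are illustrative, not from the thread:

```yaml
logsCollection:
  extraFileLogs:
    # Receiver name after "filelog/" is arbitrary; the path glob is an example.
    filelog/jenkins:
      include: [/var/jenkins_home/logs/*.log]
      start_at: beginning
      include_file_path: true
```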
Thank you. For those extra file logs, does it assume that those log locations are PVCs/PVs, or can they be ephemeral storage in the pod?
It doesn't know any better. The logs must be available to the process; how they're mounted is not visible to the process.
Hey, on this thread: similar to what you stated above about excluding logs, is this valid for finding logs? (The included logs are mounted via `extraVolumeMounts:` with a `mountPath: /var/jenkins_home/logs`.)
Hey @gsmigelski,
Is the volume mounted into the collector pod already? If so, the way to gather all logs starting with a given prefix is an `include` glob on the `filelog` receiver.
OK, yeah, that's close. But for the working code: since my target logs are associated with the PVC on the extraVolumeMount, I needed to reference that here. That's a pretty important link. Here is a valid config with the reference to the mount, named `filelog/jenkins-logs-volume`, under `agent:`. Now I need to put `exclude` in without breaking what is working. This shouldn't confuse the receivers..?
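For reference, a hedged sketch of what such an `agent:` override could look like, keeping the `filelog/jenkins-logs-volume` receiver name from this thread; the glob patterns and the `exclude` entry are assumptions, not the poster's actual config:

```yaml
agent:
  config:
    receivers:
      filelog/jenkins-logs-volume:
        include:
          - /var/jenkins_home/logs/**/*.log
        # Hypothetical exclude; adjust to the files you want to skip.
        exclude:
          - /var/jenkins_home/logs/**/*.tmp
    service:
      pipelines:
        logs:
          receivers: [filelog/jenkins-logs-volume]
```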
Lol, ChatGPT 🙄. Still safe from the robots for now, I guess... FYI, there's an easy way to disable the default log pipeline here:
That should disable the default container logging pipeline without needing to exclude. Isn't a daemonset a bit overkill to only pick up logs from a Jenkins deploy? Could we not deploy a sidecar along with your Jenkins deploy instead?
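The referenced setting was not quoted in the thread; assuming it is the chart's `logsCollection.containers.enabled` flag, the values.yaml override would look roughly like:

```yaml
logsCollection:
  containers:
    # Turn off the default container-log collection pipeline.
    enabled: false
```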
In this example I am including a mountPath `/splunk_logs` which contains several `mule*.log` files, but OTel is only monitoring the first file it finds (`mule-domain-default.log`). What is missing to make this actually "watch" all files in this path? I was expecting to see multiple "watch" log entries, as there are 4 logs in this `include:` path example.
extraVolumes:
2023-11-14T21:49:42.543638261Z stderr F 2023-11-14T21:49:42.543Z info fileconsumer/file.go:194 Started watching file {"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/splunk_logs/starter-project-template/mule-domain-default.log"}
Yes it is. How do we disable the daemonset and have it only pick up the logs from the mounted volume that's already out there on an NFS with a PV/PVC mounted on top of it (e.g. with extraVolumes / extraVolumeMounts), purely so that OTel can scan those volumes?
@gsmigelski what are the names of the files you want to include? They all start with
@hectoralicea there's no way to disable the daemonset as far as I know; this is a collector for Kubernetes, so we assume users want to collect logs from Kubernetes pods. I think what @matthewmodestino meant was to use the pure Splunk OTel Collector and configure it to consume the files you want, for example: https://github.com/signalfx/splunk-otel-collector/blob/main/examples/otel-logs-splunk/otel-collector-config.yml
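Along the lines of that linked example, a minimal standalone collector config might look like the following; the paths, token variable, and endpoint are placeholders, not values from the thread:

```yaml
receivers:
  filelog:
    # Watch every .log file under the mounted volume.
    include: [/splunk_logs/**/*.log]
exporters:
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"                              # placeholder
    endpoint: "https://splunk.example.com:8088/services/collector"  # placeholder
service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [splunk_hec]
```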
Hey Olga, can you confirm what you mentioned about daemonsets? We were looking to reduce the number of receivers to 2 per cluster (regardless of the number of nodes). I was setting this value to 2 in a deployment yaml, along with the forwarding params. In this configuration we are only looking to collect logs from an EFS that is shared among pods with different mount points. My question is: are the daemonsets required so that the receivers can consume the log path to my PV/EFS?
@gsmigelski the helm chart and daemonsets are meant for the use case of collecting from the entire cluster. If your use case only calls for a single pod or two, you don't need the helm chart. You can use the daemonset or even the cluster receiver deployment as inspiration and deploy your own deployment of the otel collector. The helm chart doesn't cover the use case of single deployments for log collection, or sidecars, but it provides a good starting point: grab the configmaps so you can deploy your own. Otherwise you will need to consult someone with experience on how to manually deploy the otel collector for your use case.
@omrozowicz-splunk, this is a good example, but how do we specify the volumes (e.g. PVs, PVCs) that we want to mount onto the Splunk OTel Collector, to have it consume files in its container-mounted volumes? These PV/PVCs are mounted on top of an NFS partition which contains the logs we wish to consume.
@hectoralicea you can look at how the daemonset in the helm chart manifests does it and copy that into your single collector deployment. You are basically just mounting volumes and mount points like any other k8s pod. Here's an example that shows how the helm chart renders the volume mounts for custom file paths — basically the same as what you're doing:
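Roughly, mounting a PVC into the collector pod via the chart values mentioned earlier in this thread could look like this, assuming the chart exposes `extraVolumes`/`extraVolumeMounts` under `agent:`; the volume and claim names are hypothetical:

```yaml
agent:
  extraVolumes:
    - name: nfs-logs
      persistentVolumeClaim:
        claimName: nfs-logs-pvc   # hypothetical PVC name
  extraVolumeMounts:
    - name: nfs-logs
      mountPath: /splunk_logs
      readOnly: true
```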
The problem with daemonsets is that we could have up to 40 nodes running, while all the logs sit in a single NFS file system, where we can create a single PV/PVC to mount into any container. With 40 nodes, it appears to launch 40 daemonset pods, one for each node. Is that really necessary?
@hectoralicea no, it's not, and I'm not saying to use a daemonset. I'm saying use its config as inspiration to answer your question. Whether it's a DaemonSet or a Deployment doesn't matter: both would have to mount the PVC into the otel pod and then configure otel to read the logs from there. Your use case is basically a "sidecar" or a single-pod deploy. But the helm chart configmaps have the most advanced examples you are going to find and should help you configure a deployment.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. If this issue is still relevant, please leave a comment explaining why it is still relevant. Otherwise, please close it.
Using the splunk-otel-collector-chart Helm chart, what would an example values.yaml look like that tells the OTel Collector to scrape only the logs in all running containers in the following directories:
/var/jenkins_home/logs
and /opt/applcation/logs
We don't want to push anything else into Splunk other than what's in these two directories.
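A hedged sketch of the kind of values.yaml this question is asking for, assuming the chart's `logsCollection` options (`containers.enabled` to turn off the default container-log pipeline, `extraFileLogs` to add file receivers); the receiver names and glob patterns are assumptions:

```yaml
logsCollection:
  containers:
    enabled: false   # don't ship the default container logs
  extraFileLogs:
    filelog/jenkins:
      include: [/var/jenkins_home/logs/*.log]
      start_at: beginning
    filelog/application:
      include: [/opt/applcation/logs/*.log]   # path spelled as in the question
      start_at: beginning
```

The two directories must also be mounted into the collector pod (e.g. via extraVolumes / extraVolumeMounts, as discussed above) for the receivers to see any files.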
Right now I'm pushing a ton of data other than what's in these two logs. These logs are not showing up.
BTW, I asked ChatGPT this question and it gave the following answer. Not sure how close this is, but I suspect it's wrong because I cannot find the key `select_resources` anywhere in the documentation.