Error with Painless scripted field 'doc['flow_id'].value'. #1

Open
myrsecurity opened this issue Apr 29, 2020 · 69 comments

@myrsecurity

Hi, I've tried to import the dashboards following the described method, and I get the error below:

Request to Elasticsearch failed: {"error":{"root_cause":[{"type":"script_exception","reason":"runtime error","script_stack":["org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:94)","org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:41)","doc['flow_id'].value"," ^---- HERE"],"script":"doc['flow_id'].value","lang":"painless"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"logstash-2020.04.29-000001","node":"RmOnDn2mSsWSKkNKg2bgsA","reason":{"type":"script_exception","reason":"runtime error","script_stack":["org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:94)","org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:41)","doc['flow_id'].value"," ^---- HERE"],"script":"doc['flow_id'].value","lang":"painless","caused_by":{"type":"illegal_argument_exception","reason":"No field found for [flow_id] in mapping with types []"}}}]},"status":400}

I'm reading from a remote pfSense via Filebeat. The logs hit Elasticsearch after all of the filtering, etc.

(screenshot attached)

Thank you

pevma (Member) commented Apr 30, 2020

How do you import the dashboards, exactly?

@alphaDev23

I'm receiving the same script exception. Dashboards, etc. are imported via the curl commands provided on the README page. The issue is preventing events in the EventsList from being displayed. I'm using the Logstash filter that is linked from the README page. The following is further information from the SN-ALL dashboard. Please advise.

script_exception at shard 0, index logstash-flow-2020.11.22, node VURsDiwmTnyNCTmjTmpqmQ
Type: script_exception
Reason: runtime error
Script stack:
org.elasticsearch.index.fielddata.ScriptDocValues$Longs.get(ScriptDocValues.java:121)
org.elasticsearch.index.fielddata.ScriptDocValues$Longs.getValue(ScriptDocValues.java:115)
'ip == ' + doc['src_ip.keyword'].value + ' && port == ' + doc['src_port'].value + ' && ip == ' + doc['dest_ip.keyword'].value + ' && port == ' + doc['dest_port'].value + ' && protocols == ' + doc['proto.keyword'].value.toLowerCase()
^---- HERE
Script: 'ip == ' + doc['src_ip.keyword'].value + ' && port == ' + doc['src_port'].value + ' && ip == ' + doc['dest_ip.keyword'].value + ' && port == ' + doc['dest_port'].value + ' && protocols == ' + doc['proto.keyword'].value.toLowerCase()
Lang: painless
Position offset: 73
Position start: 0
Position end: 232
Caused by type: illegal_state_exception
Caused by reason: A document doesn't have a value for a field! Use doc[].size()==0 to check if a document is missing a field!

pevma (Member) commented Nov 22, 2020

Was able to reproduce. Will try to cook a patch today. I think it is related to a possible fix here - StamusNetworks/SELKS#255 (comment)

I would like to confirm - on which dashboards/visualizations does this appear?

@alphaDev23

I only have Elasticsearch indexes for: alert, fileinfo, flow, http, tls. The issue is only appearing on SN-ALERTS from the data I have.

As a note, I attempted to use Filebeat to send Suricata logs directly to Elasticsearch using the provided elasticsearch7-template.json template. I verified the template was loaded in Elasticsearch. However, I believe my filebeat.yml was incorrectly configured because, by modifying 'output.elasticsearch.index', I was only able to get a logstash- index, and nothing was displayed in the dashboards. I'm not a Filebeat expert. If you have a filebeat.yml that works with the template, it would eliminate the Logstash service from the solution.

pevma (Member) commented Nov 24, 2020

Were the indexes created / do they exist in Kibana Management?

@alphaDev23

The indexes were created through the Logstash template linked from the README page. It is a slight modification, given that 'type' doesn't exist in 7.x. The indexes did not exist prior to instantiating the stack.

pevma (Member) commented Nov 24, 2020

Ok - just to confirm, does the issue appear only on SN-ALL or also on SN-ALERTS? From the error it comes from the logstash-flow... index, which I think is not used in SN-ALERTS.

@alphaDev23

I made a mistake in my last comment. It is only appearing on SN-ALL. I do not have any data in SN-ALERTS so I'm not able to confirm whether it occurs in SN-ALERTS.

@alphaDev23

Any update on the above?

pevma (Member) commented Nov 30, 2020

This patch fixes the issue, as mentioned here - #1 (comment)
You can either patch it up manually on each scripted field for each index - for example logstash-alert* / logstash-http* etc. in Kibana Management -
or it will also be taken care of in the next dashboards release, planned for this week.
Apologies for the delay !
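
For reference, a minimal sketch of what a patched scripted field could look like, following the doc[].size() hint from the error above and reusing the field names from the failing flow script (an illustration of the guard pattern, not necessarily the exact patch that shipped):

if (doc.containsKey('src_ip.keyword') && doc['src_ip.keyword'].size() > 0
    && doc.containsKey('src_port') && doc['src_port'].size() > 0
    && doc.containsKey('dest_ip.keyword') && doc['dest_ip.keyword'].size() > 0
    && doc.containsKey('dest_port') && doc['dest_port'].size() > 0
    && doc.containsKey('proto.keyword') && doc['proto.keyword'].size() > 0) {
    // All fields are present and non-empty, so reading .value on each is safe.
    return 'ip == ' + doc['src_ip.keyword'].value
        + ' && port == ' + doc['src_port'].value
        + ' && ip == ' + doc['dest_ip.keyword'].value
        + ' && port == ' + doc['dest_port'].value
        + ' && protocols == ' + doc['proto.keyword'].value.toLowerCase();
}
// Missing field or missing value: return an empty string instead of throwing.
return '';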

@alphaDev23

No worries. Thank you for fixing. Fantastic work on these dashboards, btw!

ManuelFFF commented Dec 1, 2020

*Running SELKS 6 + ELK 7.10.0 + X-Pack enabled, so all communications are via https

I am having the same issue. So, the solution is just to enable the "community_id" in Suricata config and restart Suricata, or do I need to perform more steps?

Should I use doc['community_id.keyword'].value or doc['community_id'].value?

Thank you

pevma (Member) commented Dec 1, 2020

It does not seem the issue is related?
For enabling the community id - yes, it just needs to be enabled and Suricata restarted.

ManuelFFF commented Dec 1, 2020

Hi @pevma,

Like I said, I am experiencing the same issue. When I open Discover in Kibana, there's always a pop-up warning stating there is an issue with 2/15 shards. Please see the screenshots below:

(screenshots attached: Shard error, Shard error 2, Shard error 3)

This issue starts as soon as I enable X-Pack and all communications are turned over to the https protocol. We have talked about this matter and some of the side effects it brings to the SELKS suite in other posts. I was hoping that a new SELKS release or patch would fix this and other issues that only appear if the user enables X-Pack with basic security features in ELK. Then I saw this post and thought that maybe there is an easy way to address the issue, since other users have seen the same error.

I tried enabling the community_id in the Suricata config, then restarted Suricata and EveBox. The issue does not disappear; it just mutates into a different error, as you can see here:
(screenshot: No community_id field)

It does not make any difference whether I add or leave out the .keyword. Maybe I am missing additional important steps.
I hope you can help me to make this error go away.

Thank you

@ManuelFFF

Any advice?

pevma (Member) commented Dec 3, 2020

I think you should use it without the .keyword.
Before that, you should make sure you see it properly in the JSON logs (eve.json) - there should be a community flow id key/record in the logs.
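
For illustration, a minimal guarded scripted field for that key could look like this (a sketch that assumes community_id is mapped as a keyword; if it is only available as a text field with a keyword sub-field, the same guard applies to community_id.keyword instead):

// Return the community id when present, otherwise an empty string instead of throwing.
doc.containsKey('community_id') && doc['community_id'].size() > 0 ? doc['community_id'].value : ''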

@ManuelFFF

Hi,

I only tried the .keyword because of this comment StamusNetworks/SELKS#255 (comment), but even that did not resolve the issue.

Checking the eve.json logs, I can see the flow_id field and also the community_id field:

{"timestamp":"2020-12-04T08:50:26.651146-0500","flow_id":1308048361440886,"in_iface":"enp2s0","event_type":"flow","src_ip":"192.168.1.128","src_port":58589,"dest_ip":"239.255.255.250","dest_port":3702,"proto":"UDP","app_proto":"failed","flow":{"pkts_toserver":7,"pkts_toclient":0,"bytes_toserver":4886,"bytes_toclient":0,"start":"2020-12-04T08:47:26.378486-0500","end":"2020-12-04T08:47:33.171907-0500","age":7,"state":"new","reason":"unknown","alerted":false},"community_id":"1:JJD9J+CckkTq2iKzZP6j8zVZjNY="}
{"timestamp":"2020-12-04T08:50:26.651523-0500","flow_id":1308048361440886,"in_iface":"enp2s0","event_type":"flow","src_ip":"192.168.1.128","src_port":58589,"dest_ip":"239.255.255.250","dest_port":3702,"proto":"UDP","app_proto":"failed","flow":{"pkts_toserver":7,"pkts_toclient":0,"bytes_toserver":4886,"bytes_toclient":0,"start":"2020-12-04T08:47:26.378486-0500","end":"2020-12-04T08:47:33.171907-0500","age":7,"state":"new","reason":"unknown","alerted":false},"community_id":"1:JJD9J+CckkTq2iKzZP6j8zVZjNY="}
{"timestamp":"2020-12-04T08:50:27.318169-0500","flow_id":2012176036617619,"in_iface":"enp2s0","event_type":"flow","src_ip":"192.168.1.179","src_port":50754,"dest_ip":"224.0.0.252","dest_port":5355,"proto":"UDP","app_proto":"failed","flow":{"pkts_toserver":2,"pkts_toclient":0,"bytes_toserver":150,"bytes_toclient":0,"start":"2020-12-04T08:47:15.613779-0500","end":"2020-12-04T08:47:16.020953-0500","age":1,"state":"new","reason":"unknown","alerted":false},"community_id":"1:eR0XiX1AMxyOvQcJd8kGHF+YIzY="}
{"timestamp":"2020-12-04T08:50:27.318319-0500","flow_id":2012176036617619,"in_iface":"enp2s0","event_type":"flow","src_ip":"192.168.1.179","src_port":50754,"dest_ip":"224.0.0.252","dest_port":5355,"proto":"UDP","app_proto":"failed","flow":{"pkts_toserver":2,"pkts_toclient":0,"bytes_toserver":150,"bytes_toclient":0,"start":"2020-12-04T08:47:15.613779-0500","end":"2020-12-04T08:47:16.020953-0500","age":1,"state":"new","reason":"unknown","alerted":false},"community_id":"1:eR0XiX1AMxyOvQcJd8kGHF+YIzY="}

The above logs are from a fresh, up-to-date SELKS 6 install, including ELK 7.10.0. I have not enabled the community_id field in suricata.yaml, but the field is enabled in the SELKS custom config file that overrides the Suricata base config (/etc/suricata/selks6-addin.yaml). So eve.json includes both fields, flow_id and community_id, and I am still getting the shard errors related to flow_id.

What would you recommend I check/try next?

Thank you

pevma (Member) commented Dec 5, 2020

Where exactly are you making the change/addition in the scripted fields - is it in the logstash-flow* index in Kibana Management?
And on which Discover view/visualization exactly do you get the error?

ManuelFFF commented Dec 7, 2020

Hi,

The error appears when I check Discover/logstash-*. The error is NOT present if I check Discover/logstash-flow-*. I tried the modifications on Index Patterns/logstash-*; Index Patterns/logstash-flow-* does not have a scripted field.

pevma (Member) commented Dec 7, 2020

Ok - so you mean if you do Discover with the index logstash-*? What about if you try, for example, logstash-dns-* or logstash-http-*?

ManuelFFF commented Dec 7, 2020

I verified all the logs one by one in Discover/logstash-protocol-*. Only Discover/logstash-* is being affected.

@ManuelFFF

Any thoughts?

pevma (Member) commented Dec 9, 2020 via email

ManuelFFF commented Dec 9, 2020

Hi,

I am sorry if I wasn't clear enough in my previous message. The index logstash-service-* does not really exist; I used it as a pattern name to refer to all of the following indexes:

logstash-*
logstash-alert-*
logstash-anomaly-*
logstash-dhcp-*
logstash-dnp3-*
logstash-dns-*
logstash-fileinfo-*
logstash-flow-*
logstash-http-*
logstash-ikev2-*
logstash-krb5-*
logstash-nfs-*
logstash-rdp-*
logstash-rfb-*
logstash-sip-*
logstash-smb-*
logstash-smtp-*
logstash-snmp-*
logstash-ssh-*
logstash-tftp-*
logstash-tls-*

Perhaps I should have used logstash-[event_type]-* instead, or just used the exact index names like this time. What I wanted to say is that I checked all the previous indexes, one by one, and the error only comes up when I check Discover/logstash-*.

pevma (Member) commented Dec 10, 2020

I think using logstash-event_type-* is better in terms of zooming in on the specific index/event_type.
You can also look at any of the event types in their own dashboards, including the raw events themselves at the bottom of every dashboard. So you just need to select the dashboard (from Kibana -> Dashboards) - for example, SN-SMB will show you a dashboard with some visualizations and the raw logs of the SMB event type (or SMB protocol events).

@ManuelFFF

So, there is no way to fix this error?
(screenshot attached)

pevma (Member) commented Dec 14, 2020

You should be able to import the raw API exports from here -
https://github.com/StamusNetworks/KTS7#how-to-use to fix the issue.

@alphaDev23

Was this issue resolved in the master branch? I just pulled and I'm receiving the following:

script_exception at shard 0, index logstash-flow-2020.12.23, node n6KVwvteRyaKlBCWbQPACw
Type: script_exception
Reason: runtime error
Script stack:
org.elasticsearch.index.fielddata.ScriptDocValues$Longs.get(ScriptDocValues.java:121)
org.elasticsearch.index.fielddata.ScriptDocValues$Longs.getValue(ScriptDocValues.java:115)
'ip == ' + doc['src_ip.keyword'].value + ' && port == ' + doc['src_port'].value + ' && ip == ' + doc['dest_ip.keyword'].value + ' && port == ' + doc['dest_port'].value + ' && protocols == ' + doc['proto.keyword'].value.toLowerCase()
^---- HERE
Script: 'ip == ' + doc['src_ip.keyword'].value + ' && port == ' + doc['src_port'].value + ' && ip == ' + doc['dest_ip.keyword'].value + ' && port == ' + doc['dest_port'].value + ' && protocols == ' + doc['proto.keyword'].value.toLowerCase()
Lang: painless
Position offset: 73
Position start: 0
Position end: 232
Caused by type: illegal_state_exception
Caused by reason: A document doesn't have a value for a field! Use doc[].size()==0 to check if a document is missing a field!

pevma (Member) commented Dec 23, 2020 via email

alphaDev23 commented Dec 23, 2020

I've recreated the entire ELK stack. Same issue. Please advise.

pevma (Member) commented Dec 25, 2020 via email

@alphaDev23

Is there a way to load via curl to resolve the issue? Manual loading of saved objects is less than ideal, given that the stack can be, and often is, torn down and re-created. I have added the curl commands to a bootstrap in Logstash, where these are best located.

pevma (Member) commented Dec 25, 2020 via email

@alphaDev23

I attempted to import visualizations.ndjson and received:

Sorry, there was an error
The file could not be processed due to error: "Failed to fetch"

pevma (Member) commented Dec 26, 2020 via email

pevma (Member) commented Dec 26, 2020 via email

@alphaDev23

Your suggestion appeared to resolve the issue, but I'm now receiving errors in at least the following dashboards:

HTTP
Could not locate that index-pattern-field (id: http.accept_encoding.keyword)
Could not locate that index-pattern-field (id: http.vary.keyword)

Alerts
Could not locate that index-pattern-field (id: vlan)
Could not locate that index-pattern-field (id: smtp.helo.keyword)

pevma (Member) commented Jan 5, 2021

Maybe you don't have those logs/fields for those visualizations?
Can you share a record/log that has the fields?

@alphaDev23

It appears that all the logs are in the logstash-flow- indexes. Is this correct or is there an issue with templates, etc?

pevma (Member) commented Jan 7, 2021 via email

@alphaDev23

I do have http traffic.

alphaDev23 commented Jan 14, 2021

Any thoughts on the above? There are no logstash-http-* indexes in ES, which I believe are needed for the SN-HTTP dashboard to work correctly. I only have 2 ES indexes related to the dashboards: logstash-flow-* and logstash-*.

Is there anything additional that needs to be added to filebeat.yml (below)?

filebeat.inputs:
- input_type: log
  enabled: true
  paths:
    - /var/log/suricata/eve.json

output.elasticsearch:
  hosts: ["<ES_IP>:9200"]

pevma (Member) commented Jan 14, 2021

Have you made any changes to the ES template? I cannot think of any other reason - it is either that, or there is actually no such traffic.
If you ran tcpdump, would there be HTTP traffic on the sniffing interface?

@alphaDev23

I have not made any changes to the ES template. There is HTTP traffic on the interface (on a router, internet facing), because I have a web server on the inside interface and I can connect to it from an external IP.

pevma (Member) commented Jan 19, 2021

Ok - (sorry, I did not understand) do you see that HTTP traffic on the sniffing interface of SELKS with tcpdump?

@alphaDev23

The traffic is from internet -> router (running Suricata and Filebeat) -> ELK stack. That is, Suricata is storing the logs in the eve.json file and Filebeat is shipping the logs to the ELK stack. The logstash-flow and logstash indexes are being created; the logstash-http indexes are not.

Note, this was not an issue in the 6.x version.

pevma (Member) commented Jan 19, 2021

Do you see the HTTP traffic with tcpdump on the sniffing interface, just confirming?

alphaDev23 commented Jan 19, 2021

Yes, there is both HTTP and HTTPS traffic in tcpdump. I assume that logstash-http captures both HTTP and HTTPS. The website is accessible externally on both HTTP and HTTPS. The traffic is showing in the eve.json file (actual IP addresses replaced with vars), e.g.

{"timestamp":"2021-01-17T13:25:05.000930+0000","flow_id":1534816133408444,"event_type":"flow","src_ip":"<IP>","src_port":443,"dest_ip":"<IP2>","dest_port":60804,"proto":"TCP","flow":{"pkts_toserver":4,"pkts_toclient":0,"bytes_toserver":216,"bytes_toclient":0,"start":"2021-01-17T13:23:57.157372+0000","end":"2021-01-17T13:24:04.419029+0000","age":7,"state":"new","reason":"timeout","alerted":false},"tcp":{"tcp_flags":"00","tcp_flags_ts":"00","tcp_flags_tc":"00"}}

pevma (Member) commented Jan 20, 2021

Ok, thank you for the update.

Do you have a recent "event_type":"http" event in the eve.json you can share?

alphaDev23 commented Jan 20, 2021

Here are two; IP and DOMAIN are substituted for the actual values:

{"timestamp":"2021-01-20T16:38:52.910511+0000","flow_id":1809817020410129,"in_iface":"eth0","event_type":"http","src_ip":"124.156.102.27","src_port":48470,"dest_ip":"<IP>","dest_port":80,"proto":"TCP","tx_id":0,"http":{"hostname":"","url":"/","http_user_agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36","http_content_type":"text/html","http_method":"GET","protocol":"HTTP/1.1","status":302,"redirect":"https://<DOMAIN>/","length":220}}
{"timestamp":"2021-01-20T16:41:52.887344+0000","flow_id":1347197514852422,"in_iface":"eth0","event_type":"http","src_ip":"23.148.145.17","src_port":61745,"dest_ip":"<IP>","dest_port":443,"proto":"TCP","tx_id":0,"http":{"hostname":"","http_port":443,"url":"/","http_content_type":"text/html","http_method":"GET","protocol":"HTTP/1.1","status":400,"length":362}}

pevma (Member) commented Jan 20, 2021

Ok, thank you for confirming.
Can you check in Kibana Management whether you have the indexes created for the different protocols in that case?

@alphaDev23

Yes, in Kibana all the indexes for the different protocols are created, including logstash-http-*.

pevma (Member) commented Jan 20, 2021

Ok, so that seems correct.
So if you search for those logs above (you can simply search on the flow id 1809817020410129), in which dashboard do you find the logs?

@alphaDev23

These are not in any dashboard. The 'SN-HTTP' dashboard has a count of 0.

pevma (Member) commented Jan 22, 2021

Can you show the output of
ls -lh /var/log/suricata/ - the owner of eve.json should be the logstash user. Is that so?

@alphaDev23

There is no logstash user on the router; Logstash is on a separate server. Filebeat is sending logs from eve.json to Elasticsearch.

The issue is occurring because no logstash-http-* indexes are being created. This differs from what was occurring in v6 of the dashboards. I've modified filebeat.yml to what is below, but it is still not working; I believe this is because Filebeat is not resolving "event_type" in 'output.elasticsearch.index'. Note, the addition of the 'setup.template.' settings, with the exception of 'setup.template.json.', is due to a Filebeat bug; these should not be needed absent the bug. Any thoughts on how Filebeat can send to the correct Elasticsearch index by using the 'event_type' field?

filebeat.inputs:
- input_type: log
  enabled: true
  paths:
    - /var/log/suricata/eve.json

output.elasticsearch:
  hosts: ["<ELASTICSEARCH_DOMAIN_NAME>:9200"]
  index: "logstash-%{[event_type]}-%{+yyyy.MM.dd}"

setup.template:
  enabled: true
  name: logstash
  pattern: no-name-*
  overwrite: true
  json:
    enabled: true
    path: "/etc/filebeat/elasticsearch7-template.json"

pevma (Member) commented Jan 25, 2021

Do you have a default SELKS, or have you made customizations?

@alphaDev23

I'm not using SELKS in the architecture. The router is running Suricata and Filebeat, which sends the logs to an ELK stack. Actually, in this case, Logstash is not needed since the Filebeat logs are going directly to Elasticsearch.

pevma (Member) commented Jan 26, 2021

I was not aware of that - that you are not using SELKS but a custom setup - the troubleshooting will be totally different in that case. You should look at the Logstash template for SELKS and use a similar approach.

keatonLiu commented Apr 13, 2022

I had the same issue, but I fixed it by changing the EveBox scripted field in the logstash-alert-* index pattern. It is simply because the script does not check for a missing field or missing value:

if (doc.containsKey('flow_id') && doc['flow_id'].size() > 0) { return doc['flow_id'].value }

Then it works fine.
