Often when using the crawler, I want to find distinct instances of, e.g., some specific block or template. However, when that block or template is used on a page type with many results (like the blog), the output is dominated by those pages and my use case suffers.

Currently, I work around this by downloading a CSV and `grep -v`-ing it, which is easy enough, but it would be nice to have an option in the crawler itself to exclude pages matching a provided URL pattern (or a comma-separated list of patterns).
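For reference, a minimal sketch of the current workaround, assuming the exported file is named `crawler-results.csv` and `/blog/` is the URL pattern to exclude (both names are illustrative placeholders):

```sh
# Drop rows whose URL matches the unwanted page type before inspecting the results.
grep -v '/blog/' crawler-results.csv > filtered-results.csv

# Several patterns can be excluded in one pass with repeated -e flags.
grep -v -e '/blog/' -e '/news/' crawler-results.csv > filtered-results.csv
```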