diff --git a/docs/changelog/118599.yaml b/docs/changelog/118599.yaml new file mode 100644 index 0000000000000..b410ddf5c5d19 --- /dev/null +++ b/docs/changelog/118599.yaml @@ -0,0 +1,5 @@ +pr: 118599 +summary: Archive-Index upgrade compatibility +area: Search +type: enhancement +issues: [] diff --git a/docs/changelog/118959.yaml b/docs/changelog/118959.yaml new file mode 100644 index 0000000000000..95a9c146ae672 --- /dev/null +++ b/docs/changelog/118959.yaml @@ -0,0 +1,5 @@ +pr: 118959 +summary: Allow kibana_system user to manage .reindexed-v8-internal.alerts indices +area: Authorization +type: enhancement +issues: [] diff --git a/docs/reference/data-management.asciidoc b/docs/reference/data-management.asciidoc index 4245227a1524d..7ef021dc6370b 100644 --- a/docs/reference/data-management.asciidoc +++ b/docs/reference/data-management.asciidoc @@ -6,29 +6,26 @@ -- The data you store in {es} generally falls into one of two categories: -* Content: a collection of items you want to search, such as a catalog of products -* Time series data: a stream of continuously-generated timestamped data, such as log entries - -Content might be frequently updated, +* *Content*: a collection of items you want to search, such as a catalog of products +* *Time series data*: a stream of continuously-generated timestamped data, such as log entries +*Content* might be frequently updated, but the value of the content remains relatively constant over time. You want to be able to retrieve items quickly regardless of how old they are. -Time series data keeps accumulating over time, so you need strategies for +*Time series data* keeps accumulating over time, so you need strategies for balancing the value of the data against the cost of storing it. As it ages, it tends to become less important and less-frequently accessed, so you can move it to less expensive, less performant hardware. For your oldest data, what matters is that you have access to the data. 
It's ok if queries take longer to complete. -To help you manage your data, {es} offers you: - -* <> ({ilm-init}) to manage both indices and data streams and it is fully customisable, and -* <> which is the built-in lifecycle of data streams and addresses the most -common lifecycle management needs. +To help you manage your data, {es} offers you the following options: -preview::["The built-in data stream lifecycle is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but this feature is not subject to the support SLA of official GA features."] +* <> +* <> +* {curator-ref-current}/about.html[Elastic Curator] -**{ilm-init}** can be used to manage both indices and data streams and it allows you to: +**{ilm-init}** can be used to manage both indices and data streams. It allows you to do the following: * Define the retention period of your data. The retention period is the minimum time your data will be stored in {es}. Data older than this period can be deleted by {es}. @@ -38,12 +35,24 @@ Data older than this period can be deleted by {es}. for your older indices while reducing operating costs and maintaining search performance. * Perform <> of data stored on less-performant hardware. -**Data stream lifecycle** is less feature rich but is focused on simplicity, so it allows you to easily: +**Data stream lifecycle** is less feature rich but is focused on simplicity. It allows you to do the following: * Define the retention period of your data. The retention period is the minimum time your data will be stored in {es}. Data older than this period can be deleted by {es} at a later time. -* Improve the performance of your data stream by performing background operations that will optimise the way your data -stream is stored. +* Improve the performance of your data stream by performing background operations that will optimise the way your data stream is stored. 
+ +**Elastic Curator** is a tool that allows you to manage your indices and snapshots using user-defined filters and predefined actions. If ILM provides the functionality to manage your index lifecycle, and you have at least a Basic license, consider using ILM in place of Curator. Many stack components make use of ILM by default. {curator-ref-current}/ilm.html[Learn more]. + +NOTE: <> is a deprecated Elasticsearch feature that allows you to manage the amount of data that is stored in your cluster, similar to the downsampling functionality of {ilm-init} and data stream lifecycle. This feature should not be used for new deployments. + +[TIP] +==== +{ilm-init} is not available on {es-serverless}. + +In an {ecloud} or self-managed environment, ILM lets you automatically transition indices through data tiers according to your performance needs and retention requirements. This allows you to balance hardware costs with performance. {es-serverless} eliminates this complexity by optimizing your cluster performance for you. + +Data stream lifecycle is an optimized lifecycle tool that lets you focus on the most common lifecycle management needs, without unnecessary hardware-centric concepts like data tiers. +==== -- include::ilm/index.asciidoc[] diff --git a/docs/reference/data-store-architecture.asciidoc b/docs/reference/data-store-architecture.asciidoc new file mode 100644 index 0000000000000..4ee75c15562ea --- /dev/null +++ b/docs/reference/data-store-architecture.asciidoc @@ -0,0 +1,18 @@ += Data store architecture + +[partintro] +-- + +{es} is a distributed document store. Instead of storing information as rows of columnar data, {es} stores complex data structures that have been serialized as JSON documents. When you have multiple {es} nodes in a cluster, stored documents are distributed across the cluster and can be accessed immediately +from any node. 
+ +The topics in this section provide information about the architecture of {es} and how it stores and retrieves data: + +* <>: Learn about the basic building blocks of an {es} cluster, including nodes, shards, primaries, and replicas. +* <>: Learn how {es} replicates read and write operations across shards and shard copies. +* <>: Learn how {es} allocates and balances shards across nodes. +-- + +include::nodes-shards.asciidoc[] +include::docs/data-replication.asciidoc[leveloffset=-1] +include::modules/shard-ops.asciidoc[] \ No newline at end of file diff --git a/docs/reference/docs.asciidoc b/docs/reference/docs.asciidoc index 34662401842f4..ccdbaaffb2b77 100644 --- a/docs/reference/docs.asciidoc +++ b/docs/reference/docs.asciidoc @@ -7,9 +7,7 @@ For the most up-to-date API details, refer to {api-es}/group/endpoint-document[Document APIs]. -- -This section starts with a short introduction to {es}'s <>, followed by a detailed description of the following CRUD -APIs: +This section describes the following CRUD APIs: .Single document APIs * <> @@ -24,8 +22,6 @@ APIs: * <> * <> -include::docs/data-replication.asciidoc[] - include::docs/index_.asciidoc[] include::docs/get.asciidoc[] diff --git a/docs/reference/docs/data-replication.asciidoc b/docs/reference/docs/data-replication.asciidoc index 2c1a16c81d011..6ee266070e727 100644 --- a/docs/reference/docs/data-replication.asciidoc +++ b/docs/reference/docs/data-replication.asciidoc @@ -1,6 +1,6 @@ [[docs-replication]] -=== Reading and Writing documents +=== Reading and writing documents [discrete] ==== Introduction diff --git a/docs/reference/high-availability.asciidoc b/docs/reference/high-availability.asciidoc index 2f34b6bc1bb21..37e2a38aa0f2c 100644 --- a/docs/reference/high-availability.asciidoc +++ b/docs/reference/high-availability.asciidoc @@ -3,28 +3,28 @@ [partintro] -- -Your data is important to you. Keeping it safe and available is important -to {es}.
Sometimes your cluster may experience hardware failure or a power -loss. To help you plan for this, {es} offers a number of features -to achieve high availability despite failures. +Your data is important to you. Keeping it safe and available is important to Elastic. Sometimes your cluster may experience hardware failure or a power loss. To help you plan for this, {es} offers a number of features to achieve high availability despite failures. Depending on your deployment type, you might need to provision servers in different zones or configure external repositories to meet your organization's availability needs. -* With proper planning, a cluster can be - <> to many of the - things that commonly go wrong, from the loss of a single node or network - connection right up to a zone-wide outage such as power loss. +* *<>* ++ +Distributed systems like Elasticsearch are designed to keep working even if some of their components have failed. An Elasticsearch cluster can continue operating normally if some of its nodes are unavailable or disconnected, as long as there are enough well-connected nodes to take over the unavailable node's responsibilities. ++ +If you're designing a smaller cluster, you might focus on making your cluster resilient to single-node failures. Designers of larger clusters must also consider cases where multiple nodes fail at the same time. +// need to improve connections to ECE, EC hosted, ECK pod/zone docs in the child topics -* You can use <> to replicate data to a remote _follower_ - cluster which may be in a different data centre or even on a different - continent from the leader cluster. The follower cluster acts as a hot - standby, ready for you to fail over in the event of a disaster so severe that - the leader cluster fails. The follower cluster can also act as a geo-replica - to serve searches from nearby clients. 
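Aside for reviewers: the leader/follower relationship described in this CCR bullet can be pictured with a deliberately tiny sketch. This is not Elasticsearch's implementation (real CCR ships sequence-numbered operations from the leader's translog); all names here are hypothetical, and the model only illustrates the idea of a follower pulling operations it has not yet applied.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of leader/follower replication: the follower tracks how many
// operations it has applied and pulls anything newer from the leader.
class ToyCcr {
    final List<String> leaderOps = new ArrayList<>();   // operations indexed on the leader
    final List<String> followerOps = new ArrayList<>(); // operations applied on the follower

    void indexOnLeader(String op) {
        leaderOps.add(op);
    }

    // One "poll" cycle: copy every leader operation the follower has not seen yet.
    void syncFollower() {
        for (int i = followerOps.size(); i < leaderOps.size(); i++) {
            followerOps.add(leaderOps.get(i));
        }
    }
}
```

Because the follower converges to the leader's state after each sync, it can serve reads close to users, or take over writes if the leader's site is lost.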
+* *<>* ++ +To effectively distribute read and write operations across nodes, the nodes in a cluster need good, reliable connections to each other. To provide better connections, you typically co-locate the nodes in the same data center or nearby data centers. ++ +Co-locating nodes in a single location exposes you to the risk of a single outage taking your entire cluster offline. To maintain high availability, you can prepare a second cluster that can take over in case of disaster by implementing {ccr} (CCR). ++ +CCR provides a way to automatically synchronize indices from a leader cluster to a follower cluster. This cluster could be in a different data center or even a different continent from the leader cluster. If the primary cluster fails, the secondary cluster can take over. ++ +TIP: You can also use CCR to create secondary clusters to serve read requests in geo-proximity to your users. -* The last line of defence against data loss is to take - <> of your cluster so that you can - restore a completely fresh copy of it elsewhere if needed. +* *<>* ++ +Take snapshots of your cluster that can be restored in case of failure. -- include::high-availability/cluster-design.asciidoc[] - -include::ccr/index.asciidoc[] diff --git a/docs/reference/index.asciidoc b/docs/reference/index.asciidoc index 18052cfb64e8f..8e1c211eb9426 100644 --- a/docs/reference/index.asciidoc +++ b/docs/reference/index.asciidoc @@ -76,8 +76,12 @@ include::autoscaling/index.asciidoc[] include::snapshot-restore/index.asciidoc[] +include::ccr/index.asciidoc[leveloffset=-1] + // reference +include::data-store-architecture.asciidoc[] + include::rest-api/index.asciidoc[] include::commands/index.asciidoc[] diff --git a/docs/reference/intro.asciidoc b/docs/reference/intro.asciidoc index e0100b1c5640b..391439df2ae85 100644 --- a/docs/reference/intro.asciidoc +++ b/docs/reference/intro.asciidoc @@ -397,51 +397,18 @@ geographic location of your users and your resources.
[[use-multiple-nodes-shards]] ==== Use multiple nodes and shards -[NOTE] -==== -Nodes and shards are what make {es} distributed and scalable. +When you move to production, you need to introduce multiple nodes and shards to your cluster. Nodes and shards are what make {es} distributed and scalable. The size and number of these nodes and shards depends on your data, your use case, and your budget. -These concepts aren’t essential if you’re just getting started. How you <> in production determines what you need to know: +These concepts aren't essential if you're just getting started. How you <> in production determines what you need to know: * *Self-managed {es}*: You are responsible for setting up and managing nodes, clusters, shards, and replicas. This includes managing the underlying infrastructure, scaling, and ensuring high availability through failover and backup strategies. * *Elastic Cloud*: Elastic can autoscale resources in response to workload changes. Choose from different deployment types to apply sensible defaults for your use case. A basic understanding of nodes, shards, and replicas is still important. -* *Elastic Cloud Serverless*: You don’t need to worry about nodes, shards, or replicas. These resources are 100% automated +* *Elastic Cloud Serverless*: You don't need to worry about nodes, shards, or replicas. These resources are 100% automated on the serverless platform, which is designed to scale with your workload. -==== - -You can add servers (_nodes_) to a cluster to increase capacity, and {es} automatically distributes your data and query load -across all of the available nodes. - -Elastic is able to distribute your data across nodes by subdividing an index into _shards_. Each index in {es} is a grouping -of one or more physical shards, where each shard is a self-contained Lucene index containing a subset of the documents in -the index. 
By distributing the documents in an index across multiple shards, and distributing those shards across multiple -nodes, {es} increases indexing and query capacity. - -There are two types of shards: _primaries_ and _replicas_. Each document in an index belongs to one primary shard. A replica -shard is a copy of a primary shard. Replicas maintain redundant copies of your data across the nodes in your cluster. -This protects against hardware failure and increases capacity to serve read requests like searching or retrieving a document. - -[TIP] -==== -The number of primary shards in an index is fixed at the time that an index is created, but the number of replica shards can -be changed at any time, without interrupting indexing or query operations. -==== - -Shard copies in your cluster are automatically balanced across nodes to provide scale and high availability. All nodes are -aware of all the other nodes in the cluster and can forward client requests to the appropriate node. This allows {es} -to distribute indexing and query load across the cluster. - -If you’re exploring {es} for the first time or working in a development environment, then you can use a cluster with a single node and create indices -with only one shard. However, in a production environment, you should build a cluster with multiple nodes and indices -with multiple shards to increase performance and resilience. - -// TODO - diagram -To learn about optimizing the number and size of shards in your cluster, refer to <>. -To learn about how read and write operations are replicated across shards and shard copies, refer to <>. -To adjust how shards are allocated and balanced across nodes, refer to <>. +Learn more about <>. 
[discrete] [[ccr-disaster-recovery-geo-proximity]] diff --git a/docs/reference/modules/shard-ops.asciidoc b/docs/reference/modules/shard-ops.asciidoc index c0e5ee6a220f0..66ceebcfa0319 100644 --- a/docs/reference/modules/shard-ops.asciidoc +++ b/docs/reference/modules/shard-ops.asciidoc @@ -1,5 +1,5 @@ [[shard-allocation-relocation-recovery]] -=== Shard allocation, relocation, and recovery +== Shard allocation, relocation, and recovery Each <> in Elasticsearch is divided into one or more <>. Each document in an index belongs to a single shard. @@ -12,14 +12,16 @@ Over the course of normal operation, Elasticsearch allocates shard copies to nod TIP: To learn about optimizing the number and size of shards in your cluster, refer to <>. To learn about how read and write operations are replicated across shards and shard copies, refer to <>. +[discrete] [[shard-allocation]] -==== Shard allocation +=== Shard allocation include::{es-ref-dir}/modules/shard-allocation-desc.asciidoc[] By default, the primary and replica shard copies for an index can be allocated to any node in the cluster, and may be relocated to rebalance the cluster. -===== Adjust shard allocation settings +[discrete] +==== Adjust shard allocation settings You can control how shard copies are allocated using the following settings: @@ -27,7 +29,8 @@ You can control how shard copies are allocated using the following settings: - <>: Use these settings to control how the shard copies for a specific index are allocated. For example, you might want to allocate an index to a node in a specific data tier, or to a node with specific attributes. -===== Monitor shard allocation +[discrete] +==== Monitor shard allocation If a shard copy is unassigned, it means that the shard copy is not allocated to any node in the cluster. This can happen if there are not enough nodes in the cluster to allocate the shard copy, or if the shard copy can't be allocated to any node that satisfies the shard allocation filtering rules.
When a shard copy is unassigned, your cluster is considered unhealthy and returns a yellow or red cluster health status. @@ -39,12 +42,14 @@ You can use the following APIs to monitor shard allocation: <>. +[discrete] [[shard-recovery]] -==== Shard recovery +=== Shard recovery include::{es-ref-dir}/modules/shard-recovery-desc.asciidoc[] -===== Adjust shard recovery settings +[discrete] +==== Adjust shard recovery settings To control how shards are recovered, for example the resources that can be used by recovery operations, and which indices should be prioritized for recovery, you can adjust the following settings: @@ -54,21 +59,24 @@ To control how shards are recovered, for example the resources that can be used Shard recovery operations also respect general shard allocation settings. -===== Monitor shard recovery +[discrete] +==== Monitor shard recovery You can use the following APIs to monitor shard recovery: - View a list of in-progress and completed recoveries using the <> - View detailed information about a specific recovery using the <> +[discrete] [[shard-relocation]] -==== Shard relocation +=== Shard relocation Shard relocation is the process of moving shard copies from one node to another. This can happen when a node joins or leaves the cluster, or when the cluster is rebalancing. When a shard copy is relocated, it is created as a new shard copy on the target node. When the shard copy is fully allocated and recovered, the old shard copy is deleted. If the shard copy being relocated is a primary, then the new shard copy is marked as primary before the old shard copy is deleted. -===== Adjust shard relocation settings +[discrete] +==== Adjust shard relocation settings You can control how and when shard copies are relocated. For example, you can adjust the rebalancing settings that control when shard copies are relocated to balance the cluster, or the high watermark for disk-based shard allocation that can trigger relocation.
These settings are part of the <>. diff --git a/docs/reference/nodes-shards.asciidoc b/docs/reference/nodes-shards.asciidoc new file mode 100644 index 0000000000000..11095ed7b7eb3 --- /dev/null +++ b/docs/reference/nodes-shards.asciidoc @@ -0,0 +1,43 @@ +[[nodes-shards]] +== Nodes and shards + +[NOTE] +==== +Nodes and shards are what make {es} distributed and scalable. +These concepts aren't essential if you're just getting started. How you <> in production determines what you need to know: + +* *Self-managed {es}*: You are responsible for setting up and managing nodes, clusters, shards, and replicas. This includes managing the underlying infrastructure, scaling, and ensuring high availability through failover and backup strategies. +* *Elastic Cloud*: Elastic can autoscale resources in response to workload changes. Choose from different deployment types to apply sensible defaults for your use case. A basic understanding of nodes, shards, and replicas is still important. +* *Elastic Cloud Serverless*: You don't need to worry about nodes, shards, or replicas. These resources are 100% automated on the serverless platform, which is designed to scale with your workload. +==== + +You can add servers (_nodes_) to a cluster to increase capacity, and {es} automatically distributes your data and query load across all of the available nodes. + +Elastic is able to distribute your data across nodes by subdividing an index into _shards_. Each index in {es} is a grouping +of one or more physical shards, where each shard is a self-contained Lucene index containing a subset of the documents in +the index. By distributing the documents in an index across multiple shards, and distributing those shards across multiple +nodes, {es} increases indexing and query capacity. + +There are two types of shards: _primaries_ and _replicas_. Each document in an index belongs to one primary shard. A replica +shard is a copy of a primary shard. 
Replicas maintain redundant copies of your data across the nodes in your cluster. +This protects against hardware failure and increases capacity to serve read requests like searching or retrieving a document. + +[TIP] +==== +The number of primary shards in an index is fixed at the time that an index is created, but the number of replica shards can +be changed at any time, without interrupting indexing or query operations. +==== + +Shard copies in your cluster are automatically balanced across nodes to provide scale and high availability. All nodes are +aware of all the other nodes in the cluster and can forward client requests to the appropriate node. This allows {es} +to distribute indexing and query load across the cluster. + +If you're exploring {es} for the first time or working in a development environment, then you can use a cluster with a single node and create indices +with only one shard. However, in a production environment, you should build a cluster with multiple nodes and indices +with multiple shards to increase performance and resilience. + +// TODO - diagram + +* To learn about optimizing the number and size of shards in your cluster, refer to <>. +* To learn about how read and write operations are replicated across shards and shard copies, refer to <>. +* To adjust how shards are allocated and balanced across nodes, refer to <>. 
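Aside for reviewers: the tip above, that the primary shard count is fixed at index creation, follows from how documents are routed to shards. The sketch below is a simplified stand-in, not Elasticsearch's code (the real implementation hashes the routing value with Murmur3 and applies a routing factor); it only shows why changing the primary count would remap documents.

```java
class ShardRoutingSketch {
    // Simplified routing: hash the routing value (the document ID by default),
    // then take it modulo the primary shard count. If the shard count changed,
    // the same document would map to a different shard, which is why the number
    // of primaries is fixed when an index is created.
    static int shardFor(String routing, int numberOfPrimaries) {
        return Math.floorMod(routing.hashCode(), numberOfPrimaries);
    }
}
```

Replicas, by contrast, are plain copies of a primary and take no part in this mapping, so their count can be changed at any time.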
\ No newline at end of file diff --git a/docs/reference/setup.asciidoc b/docs/reference/setup.asciidoc index a284e563917c3..80828fdbfbb02 100644 --- a/docs/reference/setup.asciidoc +++ b/docs/reference/setup.asciidoc @@ -83,8 +83,6 @@ include::modules/indices/search-settings.asciidoc[] include::settings/security-settings.asciidoc[] -include::modules/shard-ops.asciidoc[] - include::modules/indices/request_cache.asciidoc[] include::settings/snapshot-settings.asciidoc[] diff --git a/libs/entitlement/bridge/src/main/java/org/elasticsearch/entitlement/bridge/EntitlementChecker.java b/libs/entitlement/bridge/src/main/java/org/elasticsearch/entitlement/bridge/EntitlementChecker.java index d44b4667f6821..8becc1e50ffcc 100644 --- a/libs/entitlement/bridge/src/main/java/org/elasticsearch/entitlement/bridge/EntitlementChecker.java +++ b/libs/entitlement/bridge/src/main/java/org/elasticsearch/entitlement/bridge/EntitlementChecker.java @@ -13,6 +13,11 @@ import java.net.URLStreamHandlerFactory; import java.util.List; +import javax.net.ssl.HostnameVerifier; +import javax.net.ssl.HttpsURLConnection; +import javax.net.ssl.SSLContext; +import javax.net.ssl.SSLSocketFactory; + @SuppressWarnings("unused") // Called from instrumentation code inserted by the Entitlements agent public interface EntitlementChecker { @@ -21,7 +26,7 @@ public interface EntitlementChecker { void check$java_lang_Runtime$halt(Class callerClass, Runtime runtime, int status); - // URLClassLoader ctor + // URLClassLoader constructors void check$java_net_URLClassLoader$(Class callerClass, URL[] urls); void check$java_net_URLClassLoader$(Class callerClass, URL[] urls, ClassLoader parent); @@ -32,6 +37,15 @@ public interface EntitlementChecker { void check$java_net_URLClassLoader$(Class callerClass, String name, URL[] urls, ClassLoader parent, URLStreamHandlerFactory factory); + // "setFactory" methods + void check$javax_net_ssl_HttpsURLConnection$setSSLSocketFactory(Class callerClass, HttpsURLConnection conn, 
SSLSocketFactory sf); + + void check$javax_net_ssl_HttpsURLConnection$$setDefaultSSLSocketFactory(Class callerClass, SSLSocketFactory sf); + + void check$javax_net_ssl_HttpsURLConnection$$setDefaultHostnameVerifier(Class callerClass, HostnameVerifier hv); + + void check$javax_net_ssl_SSLContext$$setDefault(Class callerClass, SSLContext context); + // Process creation void check$java_lang_ProcessBuilder$start(Class callerClass, ProcessBuilder that); diff --git a/libs/entitlement/qa/common/src/main/java/org/elasticsearch/entitlement/qa/common/RestEntitlementsCheckAction.java b/libs/entitlement/qa/common/src/main/java/org/elasticsearch/entitlement/qa/common/RestEntitlementsCheckAction.java index be2ace7c17528..4afceedbe3f01 100644 --- a/libs/entitlement/qa/common/src/main/java/org/elasticsearch/entitlement/qa/common/RestEntitlementsCheckAction.java +++ b/libs/entitlement/qa/common/src/main/java/org/elasticsearch/entitlement/qa/common/RestEntitlementsCheckAction.java @@ -23,12 +23,17 @@ import java.io.UncheckedIOException; import java.net.URL; import java.net.URLClassLoader; +import java.security.NoSuchAlgorithmException; import java.util.List; import java.util.Map; import java.util.Set; import java.util.stream.Collectors; +import javax.net.ssl.HttpsURLConnection; +import javax.net.ssl.SSLContext; + import static java.util.Map.entry; +import static org.elasticsearch.entitlement.qa.common.RestEntitlementsCheckAction.CheckAction.alwaysDenied; import static org.elasticsearch.entitlement.qa.common.RestEntitlementsCheckAction.CheckAction.deniedToPlugins; import static org.elasticsearch.entitlement.qa.common.RestEntitlementsCheckAction.CheckAction.forPlugins; import static org.elasticsearch.rest.RestRequest.Method.GET; @@ -49,6 +54,10 @@ static CheckAction deniedToPlugins(Runnable action) { static CheckAction forPlugins(Runnable action) { return new CheckAction(action, false); } + + static CheckAction alwaysDenied(Runnable action) { + return new CheckAction(action, true); + 
} } private static final Map checkActions = Map.ofEntries( @@ -56,9 +65,32 @@ static CheckAction forPlugins(Runnable action) { entry("runtime_halt", deniedToPlugins(RestEntitlementsCheckAction::runtimeHalt)), entry("create_classloader", forPlugins(RestEntitlementsCheckAction::createClassLoader)), entry("processBuilder_start", deniedToPlugins(RestEntitlementsCheckAction::processBuilder_start)), - entry("processBuilder_startPipeline", deniedToPlugins(RestEntitlementsCheckAction::processBuilder_startPipeline)) + entry("processBuilder_startPipeline", deniedToPlugins(RestEntitlementsCheckAction::processBuilder_startPipeline)), + entry("set_https_connection_properties", forPlugins(RestEntitlementsCheckAction::setHttpsConnectionProperties)), + entry("set_default_ssl_socket_factory", alwaysDenied(RestEntitlementsCheckAction::setDefaultSSLSocketFactory)), + entry("set_default_hostname_verifier", alwaysDenied(RestEntitlementsCheckAction::setDefaultHostnameVerifier)), + entry("set_default_ssl_context", alwaysDenied(RestEntitlementsCheckAction::setDefaultSSLContext)) ); + private static void setDefaultSSLContext() { + logger.info("Calling SSLContext.setDefault"); + try { + SSLContext.setDefault(SSLContext.getDefault()); + } catch (NoSuchAlgorithmException e) { + throw new RuntimeException(e); + } + } + + private static void setDefaultHostnameVerifier() { + logger.info("Calling HttpsURLConnection.setDefaultHostnameVerifier"); + HttpsURLConnection.setDefaultHostnameVerifier((hostname, session) -> false); + } + + private static void setDefaultSSLSocketFactory() { + logger.info("Calling HttpsURLConnection.setDefaultSSLSocketFactory"); + HttpsURLConnection.setDefaultSSLSocketFactory(new TestSSLSocketFactory()); + } + @SuppressForbidden(reason = "Specifically testing Runtime.exit") private static void runtimeExit() { Runtime.getRuntime().exit(123); @@ -93,11 +125,17 @@ private static void processBuilder_startPipeline() { } } + private static void setHttpsConnectionProperties() { + 
logger.info("Calling setSSLSocketFactory"); + var connection = new TestHttpsURLConnection(); + connection.setSSLSocketFactory(new TestSSLSocketFactory()); + } + public RestEntitlementsCheckAction(String prefix) { this.prefix = prefix; } - public static Set getServerAndPluginsCheckActions() { + public static Set getCheckActionsAllowedInPlugins() { return checkActions.entrySet() .stream() .filter(kv -> kv.getValue().isAlwaysDeniedToPlugins() == false) diff --git a/libs/entitlement/qa/common/src/main/java/org/elasticsearch/entitlement/qa/common/TestHttpsURLConnection.java b/libs/entitlement/qa/common/src/main/java/org/elasticsearch/entitlement/qa/common/TestHttpsURLConnection.java new file mode 100644 index 0000000000000..5a96e582db02b --- /dev/null +++ b/libs/entitlement/qa/common/src/main/java/org/elasticsearch/entitlement/qa/common/TestHttpsURLConnection.java @@ -0,0 +1,48 @@ +/* + * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one + * or more contributor license agreements. Licensed under the "Elastic License + * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side + * Public License v 1"; you may not use this file except in compliance with, at + * your election, the "Elastic License 2.0", the "GNU Affero General Public + * License v3.0 only", or the "Server Side Public License, v 1". 
+ */ + +package org.elasticsearch.entitlement.qa.common; + +import java.io.IOException; +import java.security.cert.Certificate; + +import javax.net.ssl.HttpsURLConnection; +import javax.net.ssl.SSLPeerUnverifiedException; + +class TestHttpsURLConnection extends HttpsURLConnection { + TestHttpsURLConnection() { + super(null); + } + + @Override + public void connect() throws IOException {} + + @Override + public void disconnect() {} + + @Override + public boolean usingProxy() { + return false; + } + + @Override + public String getCipherSuite() { + return ""; + } + + @Override + public Certificate[] getLocalCertificates() { + return new Certificate[0]; + } + + @Override + public Certificate[] getServerCertificates() throws SSLPeerUnverifiedException { + return new Certificate[0]; + } +} diff --git a/libs/entitlement/qa/common/src/main/java/org/elasticsearch/entitlement/qa/common/TestSSLSocketFactory.java b/libs/entitlement/qa/common/src/main/java/org/elasticsearch/entitlement/qa/common/TestSSLSocketFactory.java new file mode 100644 index 0000000000000..feb19df780175 --- /dev/null +++ b/libs/entitlement/qa/common/src/main/java/org/elasticsearch/entitlement/qa/common/TestSSLSocketFactory.java @@ -0,0 +1,54 @@ +/* + * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one + * or more contributor license agreements. Licensed under the "Elastic License + * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side + * Public License v 1"; you may not use this file except in compliance with, at + * your election, the "Elastic License 2.0", the "GNU Affero General Public + * License v3.0 only", or the "Server Side Public License, v 1". 
+ */ + +package org.elasticsearch.entitlement.qa.common; + +import java.io.IOException; +import java.net.InetAddress; +import java.net.Socket; +import java.net.UnknownHostException; + +import javax.net.ssl.SSLSocketFactory; + +class TestSSLSocketFactory extends SSLSocketFactory { + @Override + public Socket createSocket(String host, int port) throws IOException, UnknownHostException { + return null; + } + + @Override + public Socket createSocket(String host, int port, InetAddress localHost, int localPort) { + return null; + } + + @Override + public Socket createSocket(InetAddress host, int port) throws IOException { + return null; + } + + @Override + public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException { + return null; + } + + @Override + public String[] getDefaultCipherSuites() { + return new String[0]; + } + + @Override + public String[] getSupportedCipherSuites() { + return new String[0]; + } + + @Override + public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException { + return null; + } +} diff --git a/libs/entitlement/qa/entitlement-allowed-nonmodular/src/main/plugin-metadata/entitlement-policy.yaml b/libs/entitlement/qa/entitlement-allowed-nonmodular/src/main/plugin-metadata/entitlement-policy.yaml index 45d4e57f66521..30fc9f0abeec0 100644 --- a/libs/entitlement/qa/entitlement-allowed-nonmodular/src/main/plugin-metadata/entitlement-policy.yaml +++ b/libs/entitlement/qa/entitlement-allowed-nonmodular/src/main/plugin-metadata/entitlement-policy.yaml @@ -1,2 +1,3 @@ ALL-UNNAMED: - create_class_loader + - set_https_connection_properties diff --git a/libs/entitlement/qa/entitlement-allowed/src/main/plugin-metadata/entitlement-policy.yaml b/libs/entitlement/qa/entitlement-allowed/src/main/plugin-metadata/entitlement-policy.yaml index 7b5e848f414b2..0a25570a9f624 100644 --- a/libs/entitlement/qa/entitlement-allowed/src/main/plugin-metadata/entitlement-policy.yaml 
+++ b/libs/entitlement/qa/entitlement-allowed/src/main/plugin-metadata/entitlement-policy.yaml @@ -1,2 +1,3 @@ org.elasticsearch.entitlement.qa.common: - create_class_loader + - set_https_connection_properties diff --git a/libs/entitlement/qa/src/javaRestTest/java/org/elasticsearch/entitlement/qa/EntitlementsAllowedIT.java b/libs/entitlement/qa/src/javaRestTest/java/org/elasticsearch/entitlement/qa/EntitlementsAllowedIT.java index 2fd4472f5cc65..c38e8b3f35efb 100644 --- a/libs/entitlement/qa/src/javaRestTest/java/org/elasticsearch/entitlement/qa/EntitlementsAllowedIT.java +++ b/libs/entitlement/qa/src/javaRestTest/java/org/elasticsearch/entitlement/qa/EntitlementsAllowedIT.java @@ -46,7 +46,7 @@ public EntitlementsAllowedIT(@Name("pathPrefix") String pathPrefix, @Name("actio public static Iterable<Object[]> data() { return Stream.of("allowed", "allowed_nonmodular") .flatMap( - path -> RestEntitlementsCheckAction.getServerAndPluginsCheckActions().stream().map(action -> new Object[] { path, action }) + path -> RestEntitlementsCheckAction.getCheckActionsAllowedInPlugins().stream().map(action -> new Object[] { path, action }) ) .toList(); } diff --git a/libs/entitlement/src/main/java/org/elasticsearch/entitlement/initialization/EntitlementInitialization.java b/libs/entitlement/src/main/java/org/elasticsearch/entitlement/initialization/EntitlementInitialization.java index c2ee935e0e5f3..aded5344024d3 100644 --- a/libs/entitlement/src/main/java/org/elasticsearch/entitlement/initialization/EntitlementInitialization.java +++ b/libs/entitlement/src/main/java/org/elasticsearch/entitlement/initialization/EntitlementInitialization.java @@ -9,6 +9,7 @@ package org.elasticsearch.entitlement.initialization; +import org.elasticsearch.core.Strings; import org.elasticsearch.core.internal.provider.ProviderLocator; import org.elasticsearch.entitlement.bootstrap.EntitlementBootstrap; import org.elasticsearch.entitlement.bridge.EntitlementChecker; @@ -120,7 +121,15 @@ private static Policy 
loadPluginPolicy(Path pluginRoot, boolean isModular, Strin // TODO: should this check actually be part of the parser? for (Scope scope : policy.scopes) { if (moduleNames.contains(scope.name) == false) { - throw new IllegalStateException("policy [" + policyFile + "] contains invalid module [" + scope.name + "]"); + throw new IllegalStateException( + Strings.format( + "Invalid module name in policy: plugin [%s] does not have module [%s]; available modules [%s]; policy file [%s]", + pluginName, + scope.name, + String.join(", ", moduleNames), + policyFile + ) + ); } } return policy; diff --git a/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/api/ElasticsearchEntitlementChecker.java b/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/api/ElasticsearchEntitlementChecker.java index 7ae7bc4238454..27bf9ea553d87 100644 --- a/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/api/ElasticsearchEntitlementChecker.java +++ b/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/api/ElasticsearchEntitlementChecker.java @@ -16,6 +16,11 @@ import java.net.URLStreamHandlerFactory; import java.util.List; +import javax.net.ssl.HostnameVerifier; +import javax.net.ssl.HttpsURLConnection; +import javax.net.ssl.SSLContext; +import javax.net.ssl.SSLSocketFactory; + /** * Implementation of the {@link EntitlementChecker} interface, providing additional * API methods for managing the checks. 
@@ -78,4 +83,28 @@ public ElasticsearchEntitlementChecker(PolicyManager policyManager) { public void check$java_lang_ProcessBuilder$$startPipeline(Class<?> callerClass, List<ProcessBuilder> builders) { policyManager.checkStartProcess(callerClass); } + + @Override + public void check$javax_net_ssl_HttpsURLConnection$setSSLSocketFactory( + Class<?> callerClass, + HttpsURLConnection connection, + SSLSocketFactory sf + ) { + policyManager.checkSetHttpsConnectionProperties(callerClass); + } + + @Override + public void check$javax_net_ssl_HttpsURLConnection$$setDefaultSSLSocketFactory(Class<?> callerClass, SSLSocketFactory sf) { + policyManager.checkSetGlobalHttpsConnectionProperties(callerClass); + } + + @Override + public void check$javax_net_ssl_HttpsURLConnection$$setDefaultHostnameVerifier(Class<?> callerClass, HostnameVerifier hv) { + policyManager.checkSetGlobalHttpsConnectionProperties(callerClass); + } + + @Override + public void check$javax_net_ssl_SSLContext$$setDefault(Class<?> callerClass, SSLContext context) { + policyManager.checkSetGlobalHttpsConnectionProperties(callerClass); + } } diff --git a/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/policy/PolicyManager.java b/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/policy/PolicyManager.java index 527a9472a7cef..330c7e59c60c7 100644 --- a/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/policy/PolicyManager.java +++ b/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/policy/PolicyManager.java @@ -130,6 +130,14 @@ public void checkCreateClassLoader(Class<?> callerClass) { checkEntitlementPresent(callerClass, CreateClassLoaderEntitlement.class); } + public void checkSetHttpsConnectionProperties(Class<?> callerClass) { + checkEntitlementPresent(callerClass, SetHttpsConnectionPropertiesEntitlement.class); + } + + public void checkSetGlobalHttpsConnectionProperties(Class<?> callerClass) { + neverEntitled(callerClass, "set global https connection properties"); + } + 
private void checkEntitlementPresent(Class<?> callerClass, Class<? extends Entitlement> entitlementClass) { var requestingModule = requestingModule(callerClass); if (isTriviallyAllowed(requestingModule)) { diff --git a/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/policy/PolicyParser.java b/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/policy/PolicyParser.java index fb63d5ffbeb48..013acf8f22fae 100644 --- a/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/policy/PolicyParser.java +++ b/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/policy/PolicyParser.java @@ -34,8 +34,11 @@ */ public class PolicyParser { - private static final Map<String, Class<? extends Entitlement>> EXTERNAL_ENTITLEMENTS = Stream.of(FileEntitlement.class, CreateClassLoaderEntitlement.class) - .collect(Collectors.toUnmodifiableMap(PolicyParser::getEntitlementTypeName, Function.identity())); + private static final Map<String, Class<? extends Entitlement>> EXTERNAL_ENTITLEMENTS = Stream.of( + FileEntitlement.class, + CreateClassLoaderEntitlement.class, + SetHttpsConnectionPropertiesEntitlement.class + ).collect(Collectors.toUnmodifiableMap(PolicyParser::getEntitlementTypeName, Function.identity())); protected final XContentParser policyParser; protected final String policyName; diff --git a/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/policy/SetHttpsConnectionPropertiesEntitlement.java b/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/policy/SetHttpsConnectionPropertiesEntitlement.java new file mode 100644 index 0000000000000..6f165f27b31ff --- /dev/null +++ b/libs/entitlement/src/main/java/org/elasticsearch/entitlement/runtime/policy/SetHttpsConnectionPropertiesEntitlement.java @@ -0,0 +1,18 @@ +/* + * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one + * or more contributor license agreements. 
Licensed under the "Elastic License + * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side + * Public License v 1"; you may not use this file except in compliance with, at + * your election, the "Elastic License 2.0", the "GNU Affero General Public + * License v3.0 only", or the "Server Side Public License, v 1". + */ + +package org.elasticsearch.entitlement.runtime.policy; + +/** + * An Entitlement to allow setting properties on a single HTTPS connection after it has been created + */ +public class SetHttpsConnectionPropertiesEntitlement implements Entitlement { + @ExternalEntitlement(esModulesOnly = false) + public SetHttpsConnectionPropertiesEntitlement() {} +} diff --git a/libs/entitlement/src/test/java/org/elasticsearch/entitlement/runtime/policy/PolicyParserTests.java b/libs/entitlement/src/test/java/org/elasticsearch/entitlement/runtime/policy/PolicyParserTests.java index 633c76cb8c04f..bee8767fcd900 100644 --- a/libs/entitlement/src/test/java/org/elasticsearch/entitlement/runtime/policy/PolicyParserTests.java +++ b/libs/entitlement/src/test/java/org/elasticsearch/entitlement/runtime/policy/PolicyParserTests.java @@ -74,4 +74,23 @@ public void testParseCreateClassloader() throws IOException { ) ); } + + public void testParseSetHttpsConnectionProperties() throws IOException { + Policy parsedPolicy = new PolicyParser(new ByteArrayInputStream(""" + entitlement-module-name: + - set_https_connection_properties + """.getBytes(StandardCharsets.UTF_8)), "test-policy.yaml", true).parsePolicy(); + Policy builtPolicy = new Policy( + "test-policy.yaml", + List.of(new Scope("entitlement-module-name", List.of(new SetHttpsConnectionPropertiesEntitlement()))) + ); + assertThat( + parsedPolicy.scopes, + contains( + both(transformedMatch((Scope scope) -> scope.name, equalTo("entitlement-module-name"))).and( + transformedMatch(scope -> scope.entitlements, contains(instanceOf(SetHttpsConnectionPropertiesEntitlement.class))) + ) + ) + ); + } } diff --git 
a/modules/apm/src/main/plugin-metadata/entitlement-policy.yaml b/modules/apm/src/main/plugin-metadata/entitlement-policy.yaml new file mode 100644 index 0000000000000..30b2bd1978d1b --- /dev/null +++ b/modules/apm/src/main/plugin-metadata/entitlement-policy.yaml @@ -0,0 +1,2 @@ +elastic.apm.agent: + - set_https_connection_properties diff --git a/modules/repository-gcs/src/main/plugin-metadata/entitlement-policy.yaml b/modules/repository-gcs/src/main/plugin-metadata/entitlement-policy.yaml new file mode 100644 index 0000000000000..a1ff54f02d969 --- /dev/null +++ b/modules/repository-gcs/src/main/plugin-metadata/entitlement-policy.yaml @@ -0,0 +1,2 @@ +ALL-UNNAMED: + - set_https_connection_properties # required by google-http-client diff --git a/plugins/discovery-gce/src/main/plugin-metadata/entitlement-policy.yaml b/plugins/discovery-gce/src/main/plugin-metadata/entitlement-policy.yaml new file mode 100644 index 0000000000000..a1ff54f02d969 --- /dev/null +++ b/plugins/discovery-gce/src/main/plugin-metadata/entitlement-policy.yaml @@ -0,0 +1,2 @@ +ALL-UNNAMED: + - set_https_connection_properties # required by google-http-client diff --git a/server/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java b/server/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java index 9be23c91db072..6822c201ab030 100644 --- a/server/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java +++ b/server/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java @@ -213,7 +213,6 @@ private static void initPhase2(Bootstrap bootstrap) throws IOException { // load the plugin Java modules and layers now for use in entitlements var pluginsLoader = PluginsLoader.createPluginsLoader(nodeEnv.modulesFile(), nodeEnv.pluginsFile()); bootstrap.setPluginsLoader(pluginsLoader); - var pluginsResolver = PluginsResolver.create(pluginsLoader); if (Boolean.parseBoolean(System.getProperty("es.entitlements.enabled"))) { LogManager.getLogger(Elasticsearch.class).info("Bootstrapping 
Entitlements"); @@ -227,6 +226,8 @@ private static void initPhase2(Bootstrap bootstrap) throws IOException { .map(bundle -> new EntitlementBootstrap.PluginData(bundle.getDir(), bundle.pluginDescriptor().isModular(), true)) ).toList(); + var pluginsResolver = PluginsResolver.create(pluginsLoader); + EntitlementBootstrap.bootstrap(pluginData, pluginsResolver::resolveClassToPluginName); } else if (RuntimeVersionFeature.isSecurityManagerAvailable()) { // install SM after natives, shutdown hooks, etc. diff --git a/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/authz/store/KibanaOwnedReservedRoleDescriptors.java b/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/authz/store/KibanaOwnedReservedRoleDescriptors.java index 5e19b26b8f4de..d8b4b15307c47 100644 --- a/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/authz/store/KibanaOwnedReservedRoleDescriptors.java +++ b/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/authz/store/KibanaOwnedReservedRoleDescriptors.java @@ -236,7 +236,10 @@ static RoleDescriptor kibanaSystem(String name) { // Observability, etc. // Kibana system user creates these indices; reads / writes to them via the // aliases (see below). - RoleDescriptor.IndicesPrivileges.builder().indices(ReservedRolesStore.ALERTS_BACKING_INDEX).privileges("all").build(), + RoleDescriptor.IndicesPrivileges.builder() + .indices(ReservedRolesStore.ALERTS_BACKING_INDEX, ReservedRolesStore.ALERTS_BACKING_INDEX_REINDEXED) + .privileges("all") + .build(), // "Alerts as data" public index aliases used in Security Solution, // Observability, etc. // Kibana system user uses them to read / write alerts. @@ -248,7 +251,7 @@ static RoleDescriptor kibanaSystem(String name) { // Kibana system user creates these indices; reads / writes to them via the // aliases (see below). 
RoleDescriptor.IndicesPrivileges.builder() - .indices(ReservedRolesStore.PREVIEW_ALERTS_BACKING_INDEX_ALIAS) + .indices(ReservedRolesStore.PREVIEW_ALERTS_BACKING_INDEX, ReservedRolesStore.PREVIEW_ALERTS_BACKING_INDEX_REINDEXED) + .privileges("all") + .build(), // Endpoint / Fleet policy responses. Kibana requires read access to send diff --git a/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/authz/store/ReservedRolesStore.java b/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/authz/store/ReservedRolesStore.java index bdaf75203ee5d..e43ae2d1b360b 100644 --- a/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/authz/store/ReservedRolesStore.java +++ b/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/authz/store/ReservedRolesStore.java @@ -46,6 +46,7 @@ public class ReservedRolesStore implements BiConsumer<Set<String>, ActionListene /** Alerts, Rules, Cases (RAC) index used by multiple solutions */ public static final String ALERTS_BACKING_INDEX = ".internal.alerts*"; + public static final String ALERTS_BACKING_INDEX_REINDEXED = ".reindexed-v8-internal.alerts*"; /** Alerts, Rules, Cases (RAC) index used by multiple solutions */ public static final String ALERTS_INDEX_ALIAS = ".alerts*"; @@ -54,7 +55,8 @@ public class ReservedRolesStore implements BiConsumer<Set<String>, ActionListene public static final String PREVIEW_ALERTS_INDEX_ALIAS = ".preview.alerts*"; /** Alerts, Rules, Cases (RAC) preview index used by multiple solutions */ - public static final String PREVIEW_ALERTS_BACKING_INDEX_ALIAS = ".internal.preview.alerts*"; + public static final String PREVIEW_ALERTS_BACKING_INDEX = ".internal.preview.alerts*"; + public static final String PREVIEW_ALERTS_BACKING_INDEX_REINDEXED = ".reindexed-v8-internal.preview.alerts*"; /** "Security Solutions" only lists index for value lists for detections */ public static final String LISTS_INDEX = ".lists-*"; @@ -885,8 +887,10 @@ private static RoleDescriptor 
buildEditorRoleDescriptor() { RoleDescriptor.IndicesPrivileges.builder() .indices( ReservedRolesStore.ALERTS_BACKING_INDEX, + ReservedRolesStore.ALERTS_BACKING_INDEX_REINDEXED, ReservedRolesStore.ALERTS_INDEX_ALIAS, - ReservedRolesStore.PREVIEW_ALERTS_BACKING_INDEX_ALIAS, + ReservedRolesStore.PREVIEW_ALERTS_BACKING_INDEX, + ReservedRolesStore.PREVIEW_ALERTS_BACKING_INDEX_REINDEXED, ReservedRolesStore.PREVIEW_ALERTS_INDEX_ALIAS ) .privileges("read", "view_index_metadata", "write", "maintenance") diff --git a/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/security/authz/store/ReservedRolesStoreTests.java b/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/security/authz/store/ReservedRolesStoreTests.java index 937ab03010ff1..a96b26ddcb1eb 100644 --- a/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/security/authz/store/ReservedRolesStoreTests.java +++ b/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/security/authz/store/ReservedRolesStoreTests.java @@ -614,9 +614,11 @@ public void testKibanaSystemRole() { ".apm-source-map", ReservedRolesStore.ALERTS_LEGACY_INDEX + randomAlphaOfLength(randomIntBetween(0, 13)), ReservedRolesStore.ALERTS_BACKING_INDEX + randomAlphaOfLength(randomIntBetween(0, 13)), + ReservedRolesStore.ALERTS_BACKING_INDEX_REINDEXED + randomAlphaOfLength(randomIntBetween(0, 13)), ReservedRolesStore.ALERTS_INDEX_ALIAS + randomAlphaOfLength(randomIntBetween(0, 13)), ReservedRolesStore.PREVIEW_ALERTS_INDEX_ALIAS + randomAlphaOfLength(randomIntBetween(0, 13)), - ReservedRolesStore.PREVIEW_ALERTS_BACKING_INDEX_ALIAS + randomAlphaOfLength(randomIntBetween(0, 13)), + ReservedRolesStore.PREVIEW_ALERTS_BACKING_INDEX + randomAlphaOfLength(randomIntBetween(0, 13)), + ReservedRolesStore.PREVIEW_ALERTS_BACKING_INDEX_REINDEXED + randomAlphaOfLength(randomIntBetween(0, 13)), ReservedRolesStore.LISTS_INDEX + randomAlphaOfLength(randomIntBetween(0, 13)), ReservedRolesStore.LISTS_ITEMS_INDEX + 
randomAlphaOfLength(randomIntBetween(0, 13)), ".slo-observability." + randomAlphaOfLength(randomIntBetween(0, 13)) diff --git a/x-pack/plugin/esql/qa/security/src/javaRestTest/java/org/elasticsearch/xpack/esql/EsqlSecurityIT.java b/x-pack/plugin/esql/qa/security/src/javaRestTest/java/org/elasticsearch/xpack/esql/EsqlSecurityIT.java index 00cf4d63af335..ce4aa8582929b 100644 --- a/x-pack/plugin/esql/qa/security/src/javaRestTest/java/org/elasticsearch/xpack/esql/EsqlSecurityIT.java +++ b/x-pack/plugin/esql/qa/security/src/javaRestTest/java/org/elasticsearch/xpack/esql/EsqlSecurityIT.java @@ -548,7 +548,7 @@ record Listen(long timestamp, String songId, double duration) { public void testLookupJoinIndexAllowed() throws Exception { assumeTrue( "Requires LOOKUP JOIN capability", - EsqlSpecTestCase.hasCapabilities(adminClient(), List.of(EsqlCapabilities.Cap.JOIN_LOOKUP_V9.capabilityName())) + EsqlSpecTestCase.hasCapabilities(adminClient(), List.of(EsqlCapabilities.Cap.JOIN_LOOKUP_V10.capabilityName())) ); Response resp = runESQLCommand( @@ -587,7 +587,7 @@ public void testLookupJoinIndexAllowed() throws Exception { public void testLookupJoinIndexForbidden() throws Exception { assumeTrue( "Requires LOOKUP JOIN capability", - EsqlSpecTestCase.hasCapabilities(adminClient(), List.of(EsqlCapabilities.Cap.JOIN_LOOKUP_V9.capabilityName())) + EsqlSpecTestCase.hasCapabilities(adminClient(), List.of(EsqlCapabilities.Cap.JOIN_LOOKUP_V10.capabilityName())) ); var resp = expectThrows( diff --git a/x-pack/plugin/esql/qa/server/mixed-cluster/src/javaRestTest/java/org/elasticsearch/xpack/esql/qa/mixed/MixedClusterEsqlSpecIT.java b/x-pack/plugin/esql/qa/server/mixed-cluster/src/javaRestTest/java/org/elasticsearch/xpack/esql/qa/mixed/MixedClusterEsqlSpecIT.java index 9a09401785df0..b22925b44ebab 100644 --- a/x-pack/plugin/esql/qa/server/mixed-cluster/src/javaRestTest/java/org/elasticsearch/xpack/esql/qa/mixed/MixedClusterEsqlSpecIT.java +++ 
b/x-pack/plugin/esql/qa/server/mixed-cluster/src/javaRestTest/java/org/elasticsearch/xpack/esql/qa/mixed/MixedClusterEsqlSpecIT.java @@ -21,7 +21,7 @@ import java.util.List; import static org.elasticsearch.xpack.esql.CsvTestUtils.isEnabled; -import static org.elasticsearch.xpack.esql.action.EsqlCapabilities.Cap.JOIN_LOOKUP_V9; +import static org.elasticsearch.xpack.esql.action.EsqlCapabilities.Cap.JOIN_LOOKUP_V10; import static org.elasticsearch.xpack.esql.qa.rest.EsqlSpecTestCase.Mode.ASYNC; public class MixedClusterEsqlSpecIT extends EsqlSpecTestCase { @@ -96,7 +96,7 @@ protected boolean supportsInferenceTestService() { @Override protected boolean supportsIndexModeLookup() throws IOException { - return hasCapabilities(List.of(JOIN_LOOKUP_V9.capabilityName())); + return hasCapabilities(List.of(JOIN_LOOKUP_V10.capabilityName())); } @Override diff --git a/x-pack/plugin/esql/qa/server/multi-clusters/src/javaRestTest/java/org/elasticsearch/xpack/esql/ccq/MultiClusterSpecIT.java b/x-pack/plugin/esql/qa/server/multi-clusters/src/javaRestTest/java/org/elasticsearch/xpack/esql/ccq/MultiClusterSpecIT.java index a809216d3beb3..987a5334f903c 100644 --- a/x-pack/plugin/esql/qa/server/multi-clusters/src/javaRestTest/java/org/elasticsearch/xpack/esql/ccq/MultiClusterSpecIT.java +++ b/x-pack/plugin/esql/qa/server/multi-clusters/src/javaRestTest/java/org/elasticsearch/xpack/esql/ccq/MultiClusterSpecIT.java @@ -48,7 +48,7 @@ import static org.elasticsearch.xpack.esql.EsqlTestUtils.classpathResources; import static org.elasticsearch.xpack.esql.action.EsqlCapabilities.Cap.INLINESTATS; import static org.elasticsearch.xpack.esql.action.EsqlCapabilities.Cap.INLINESTATS_V2; -import static org.elasticsearch.xpack.esql.action.EsqlCapabilities.Cap.JOIN_LOOKUP_V9; +import static org.elasticsearch.xpack.esql.action.EsqlCapabilities.Cap.JOIN_LOOKUP_V10; import static org.elasticsearch.xpack.esql.action.EsqlCapabilities.Cap.JOIN_PLANNING_V1; import static 
org.elasticsearch.xpack.esql.action.EsqlCapabilities.Cap.METADATA_FIELDS_REMOTE_TEST; import static org.elasticsearch.xpack.esql.qa.rest.EsqlSpecTestCase.Mode.SYNC; @@ -124,7 +124,7 @@ protected void shouldSkipTest(String testName) throws IOException { assumeFalse("INLINESTATS not yet supported in CCS", testCase.requiredCapabilities.contains(INLINESTATS.capabilityName())); assumeFalse("INLINESTATS not yet supported in CCS", testCase.requiredCapabilities.contains(INLINESTATS_V2.capabilityName())); assumeFalse("INLINESTATS not yet supported in CCS", testCase.requiredCapabilities.contains(JOIN_PLANNING_V1.capabilityName())); - assumeFalse("LOOKUP JOIN not yet supported in CCS", testCase.requiredCapabilities.contains(JOIN_LOOKUP_V9.capabilityName())); + assumeFalse("LOOKUP JOIN not yet supported in CCS", testCase.requiredCapabilities.contains(JOIN_LOOKUP_V10.capabilityName())); } private TestFeatureService remoteFeaturesService() throws IOException { @@ -283,8 +283,8 @@ protected boolean supportsInferenceTestService() { @Override protected boolean supportsIndexModeLookup() throws IOException { - // CCS does not yet support JOIN_LOOKUP_V9 and clusters falsely report they have this capability - // return hasCapabilities(List.of(JOIN_LOOKUP_V9.capabilityName())); + // CCS does not yet support JOIN_LOOKUP_V10 and clusters falsely report they have this capability + // return hasCapabilities(List.of(JOIN_LOOKUP_V10.capabilityName())); return false; } } diff --git a/x-pack/plugin/esql/qa/server/src/main/java/org/elasticsearch/xpack/esql/qa/rest/RequestIndexFilteringTestCase.java b/x-pack/plugin/esql/qa/server/src/main/java/org/elasticsearch/xpack/esql/qa/rest/RequestIndexFilteringTestCase.java index a83b6cf2e906c..ba93e9b31bb09 100644 --- a/x-pack/plugin/esql/qa/server/src/main/java/org/elasticsearch/xpack/esql/qa/rest/RequestIndexFilteringTestCase.java +++ b/x-pack/plugin/esql/qa/server/src/main/java/org/elasticsearch/xpack/esql/qa/rest/RequestIndexFilteringTestCase.java @@ 
-221,7 +221,7 @@ public void testIndicesDontExist() throws IOException { assertThat(e.getMessage(), containsString("index_not_found_exception")); assertThat(e.getMessage(), containsString("no such index [foo]")); - if (EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()) { + if (EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()) { e = expectThrows( ResponseException.class, () -> runEsql(timestampFilter("gte", "2020-01-01").query("FROM test1 | LOOKUP JOIN foo ON id1")) diff --git a/x-pack/plugin/esql/qa/testFixtures/src/main/resources/lookup-join.csv-spec b/x-pack/plugin/esql/qa/testFixtures/src/main/resources/lookup-join.csv-spec index 309386228b1c8..9b1356438141c 100644 --- a/x-pack/plugin/esql/qa/testFixtures/src/main/resources/lookup-join.csv-spec +++ b/x-pack/plugin/esql/qa/testFixtures/src/main/resources/lookup-join.csv-spec @@ -8,7 +8,7 @@ ############################################### basicOnTheDataNode -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | EVAL language_code = languages @@ -25,7 +25,7 @@ emp_no:integer | language_code:integer | language_name:keyword ; basicRow -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW language_code = 1 | LOOKUP JOIN languages_lookup ON language_code @@ -36,7 +36,7 @@ language_code:integer | language_name:keyword ; basicOnTheCoordinator -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | SORT emp_no @@ -53,7 +53,7 @@ emp_no:integer | language_code:integer | language_name:keyword ; subsequentEvalOnTheDataNode -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | EVAL language_code = languages @@ -71,7 +71,7 @@ emp_no:integer | language_code:integer | language_name:keyword | language_code_x ; subsequentEvalOnTheCoordinator -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | SORT emp_no @@ -89,7 +89,7 @@ emp_no:integer | language_code:integer 
| language_name:keyword | language_code_x ; sortEvalBeforeLookup -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | SORT emp_no @@ -106,7 +106,7 @@ emp_no:integer | language_code:integer | language_name:keyword ; nonUniqueLeftKeyOnTheDataNode -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | WHERE emp_no <= 10030 @@ -130,7 +130,7 @@ emp_no:integer | language_code:integer | language_name:keyword ; nonUniqueRightKeyOnTheDataNode -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | EVAL language_code = emp_no % 10 @@ -150,7 +150,7 @@ emp_no:integer | language_code:integer | language_name:keyword | country:k ; nonUniqueRightKeyOnTheCoordinator -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | SORT emp_no @@ -170,7 +170,7 @@ emp_no:integer | language_code:integer | language_name:keyword | country:k ; nonUniqueRightKeyFromRow -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW language_code = 2 | LOOKUP JOIN languages_lookup_non_unique_key ON language_code @@ -183,7 +183,7 @@ language_code:integer | language_name:keyword | country:keyword ; repeatedIndexOnFrom -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM languages_lookup | LOOKUP JOIN languages_lookup ON language_code @@ -201,7 +201,7 @@ dropAllLookedUpFieldsOnTheDataNode-Ignore // Depends on // https://github.com/elastic/elasticsearch/issues/118778 // https://github.com/elastic/elasticsearch/issues/118781 -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | EVAL language_code = emp_no % 10 @@ -222,7 +222,7 @@ dropAllLookedUpFieldsOnTheCoordinator-Ignore // Depends on // https://github.com/elastic/elasticsearch/issues/118778 // https://github.com/elastic/elasticsearch/issues/118781 -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM 
employees | SORT emp_no @@ -247,7 +247,7 @@ emp_no:integer ############################################### filterOnLeftSide -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | EVAL language_code = languages @@ -264,7 +264,7 @@ emp_no:integer | language_code:integer | language_name:keyword ; filterOnRightSide -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | LOOKUP JOIN message_types_lookup ON message @@ -280,7 +280,7 @@ FROM sample_data ; filterOnRightSideAfterStats -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | LOOKUP JOIN message_types_lookup ON message @@ -293,7 +293,7 @@ count:long | type:keyword ; filterOnJoinKey -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | EVAL language_code = languages @@ -308,7 +308,7 @@ emp_no:integer | language_code:integer | language_name:keyword ; filterOnJoinKeyAndRightSide -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | WHERE emp_no < 10006 @@ -325,7 +325,7 @@ emp_no:integer | language_code:integer | language_name:keyword ; filterOnRightSideOnTheCoordinator -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | SORT emp_no @@ -341,7 +341,7 @@ emp_no:integer | language_code:integer | language_name:keyword ; filterOnJoinKeyOnTheCoordinator -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | SORT emp_no @@ -357,7 +357,7 @@ emp_no:integer | language_code:integer | language_name:keyword ; filterOnJoinKeyAndRightSideOnTheCoordinator -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | SORT emp_no @@ -374,7 +374,7 @@ emp_no:integer | language_code:integer | language_name:keyword ; filterOnTheDataNodeThenFilterOnTheCoordinator -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM 
employees | EVAL language_code = languages @@ -395,7 +395,7 @@ emp_no:integer | language_code:integer | language_name:keyword ########################################################################### nullJoinKeyOnTheDataNode -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | WHERE emp_no < 10004 @@ -412,7 +412,7 @@ emp_no:integer | language_code:integer | language_name:keyword ; mvJoinKeyOnTheDataNode -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | WHERE 10003 < emp_no AND emp_no < 10008 @@ -430,7 +430,7 @@ emp_no:integer | language_code:integer | language_name:keyword ; mvJoinKeyFromRow -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW language_code = [4, 5, 6, 7] | LOOKUP JOIN languages_lookup_non_unique_key ON language_code @@ -443,7 +443,7 @@ language_code:integer | language_name:keyword | country:keyword ; mvJoinKeyFromRowExpanded -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW language_code = [4, 5, 6, 7, 8] | MV_EXPAND language_code @@ -465,7 +465,7 @@ language_code:integer | language_name:keyword | country:keyword ########################################################################### joinOnNestedField -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM employees | WHERE 10000 < emp_no AND emp_no < 10006 @@ -485,7 +485,7 @@ emp_no:integer | language.id:integer | language.name:text joinOnNestedFieldRow -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW language.code = "EN" | LOOKUP JOIN languages_nested_fields ON language.code @@ -498,7 +498,7 @@ language.id:integer | language.code:keyword | language.name.keyword:keyword joinOnNestedNestedFieldRow -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW language.name.keyword = "English" | LOOKUP JOIN languages_nested_fields ON language.name.keyword @@ -514,7 +514,7 @@ 
language.id:integer | language.name:text | language.name.keyword:keyword ############################################### lookupIPFromRow -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", right = "right" | LOOKUP JOIN clientips_lookup ON client_ip @@ -525,7 +525,7 @@ left | 172.21.0.5 | right | Development ; lookupIPFromKeepRow -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", right = "right" | KEEP left, client_ip, right @@ -537,7 +537,7 @@ left | 172.21.0.5 | right | Development ; lookupIPFromRowWithShadowing -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", env = "env", right = "right" | LOOKUP JOIN clientips_lookup ON client_ip @@ -548,7 +548,7 @@ left | 172.21.0.5 | right | Development ; lookupIPFromRowWithShadowingKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", env = "env", right = "right" | EVAL client_ip = client_ip::keyword @@ -561,7 +561,7 @@ left | 172.21.0.5 | right | Development ; lookupIPFromRowWithShadowingKeepReordered -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", env = "env", right = "right" | EVAL client_ip = client_ip::keyword @@ -574,7 +574,7 @@ right | Development | 172.21.0.5 ; lookupIPFromIndex -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | EVAL client_ip = client_ip::keyword @@ -593,7 +593,7 @@ ignoreOrder:true ; lookupIPFromIndexKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | EVAL client_ip = client_ip::keyword @@ -613,7 +613,7 @@ ignoreOrder:true ; lookupIPFromIndexKeepKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | KEEP client_ip, 
event_duration, @timestamp, message @@ -635,7 +635,7 @@ timestamp:date | client_ip:keyword | event_duration:long | msg:keyword ; lookupIPFromIndexStats -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | EVAL client_ip = client_ip::keyword @@ -651,7 +651,7 @@ count:long | env:keyword ; lookupIPFromIndexStatsKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | EVAL client_ip = client_ip::keyword @@ -668,7 +668,7 @@ count:long | env:keyword ; statsAndLookupIPFromIndex -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | EVAL client_ip = client_ip::keyword @@ -689,7 +689,7 @@ count:long | client_ip:keyword | env:keyword ############################################### lookupMessageFromRow -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", message = "Connected to 10.1.0.1", right = "right" | LOOKUP JOIN message_types_lookup ON message @@ -700,7 +700,7 @@ left | Connected to 10.1.0.1 | right | Success ; lookupMessageFromKeepRow -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", message = "Connected to 10.1.0.1", right = "right" | KEEP left, message, right @@ -712,7 +712,7 @@ left | Connected to 10.1.0.1 | right | Success ; lookupMessageFromRowWithShadowing -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", message = "Connected to 10.1.0.1", type = "unknown", right = "right" | LOOKUP JOIN message_types_lookup ON message @@ -723,7 +723,7 @@ left | Connected to 10.1.0.1 | right | Success ; lookupMessageFromRowWithShadowingKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", message = "Connected to 10.1.0.1", type = "unknown", right = "right" | LOOKUP JOIN message_types_lookup ON message @@ -735,7 +735,7 @@ left | Connected to 10.1.0.1 | right | Success ; 
lookupMessageFromIndex -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | LOOKUP JOIN message_types_lookup ON message @@ -753,7 +753,7 @@ ignoreOrder:true ; lookupMessageFromIndexKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | LOOKUP JOIN message_types_lookup ON message @@ -772,7 +772,7 @@ ignoreOrder:true ; lookupMessageFromIndexKeepKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | KEEP client_ip, event_duration, @timestamp, message @@ -792,7 +792,7 @@ ignoreOrder:true ; lookupMessageFromIndexKeepReordered -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | LOOKUP JOIN message_types_lookup ON message @@ -811,7 +811,7 @@ Success | 172.21.2.162 | 3450233 | Connected to 10.1.0.3 ; lookupMessageFromIndexStats -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | LOOKUP JOIN message_types_lookup ON message @@ -826,7 +826,7 @@ count:long | type:keyword ; lookupMessageFromIndexStatsKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | LOOKUP JOIN message_types_lookup ON message @@ -842,7 +842,7 @@ count:long | type:keyword ; statsAndLookupMessageFromIndex -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | STATS count = count(message) BY message @@ -860,7 +860,7 @@ count:long | type:keyword | message:keyword ; lookupMessageFromIndexTwice -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | LOOKUP JOIN message_types_lookup ON message @@ -882,7 +882,7 @@ ignoreOrder:true ; lookupMessageFromIndexTwiceKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | LOOKUP JOIN message_types_lookup ON message @@ -905,7 +905,7 @@ ignoreOrder:true ; lookupMessageFromIndexTwiceFullyShadowing 
-required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | LOOKUP JOIN message_types_lookup ON message @@ -929,7 +929,7 @@ ignoreOrder:true ############################################### lookupIPAndMessageFromRow -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", message = "Connected to 10.1.0.1", right = "right" | LOOKUP JOIN clientips_lookup ON client_ip @@ -941,7 +941,7 @@ left | 172.21.0.5 | Connected to 10.1.0.1 | right | Devel ; lookupIPAndMessageFromRowKeepBefore -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", message = "Connected to 10.1.0.1", right = "right" | KEEP left, client_ip, message, right @@ -954,7 +954,7 @@ left | 172.21.0.5 | Connected to 10.1.0.1 | right | Devel ; lookupIPAndMessageFromRowKeepBetween -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", message = "Connected to 10.1.0.1", right = "right" | LOOKUP JOIN clientips_lookup ON client_ip @@ -967,7 +967,7 @@ left | 172.21.0.5 | Connected to 10.1.0.1 | right | Devel ; lookupIPAndMessageFromRowKeepAfter -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", message = "Connected to 10.1.0.1", right = "right" | LOOKUP JOIN clientips_lookup ON client_ip @@ -980,7 +980,7 @@ left | 172.21.0.5 | Connected to 10.1.0.1 | right | Devel ; lookupIPAndMessageFromRowWithShadowing -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", message = "Connected to 10.1.0.1", env = "env", type = "type", right = "right" | LOOKUP JOIN clientips_lookup ON client_ip @@ -992,7 +992,7 @@ left | 172.21.0.5 | Connected to 10.1.0.1 | right | Devel ; lookupIPAndMessageFromRowWithShadowingKeep -required_capability: join_lookup_v9 +required_capability: 
join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", message = "Connected to 10.1.0.1", env = "env", right = "right" | EVAL client_ip = client_ip::keyword @@ -1006,7 +1006,7 @@ left | 172.21.0.5 | Connected to 10.1.0.1 | right | Devel ; lookupIPAndMessageFromRowWithShadowingKeepKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", message = "Connected to 10.1.0.1", env = "env", right = "right" | EVAL client_ip = client_ip::keyword @@ -1021,7 +1021,7 @@ left | 172.21.0.5 | Connected to 10.1.0.1 | right | Devel ; lookupIPAndMessageFromRowWithShadowingKeepKeepKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", message = "Connected to 10.1.0.1", env = "env", right = "right" | EVAL client_ip = client_ip::keyword @@ -1037,7 +1037,7 @@ left | 172.21.0.5 | Connected to 10.1.0.1 | right | Devel ; lookupIPAndMessageFromRowWithShadowingKeepReordered -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 ROW left = "left", client_ip = "172.21.0.5", message = "Connected to 10.1.0.1", env = "env", right = "right" | EVAL client_ip = client_ip::keyword @@ -1051,7 +1051,7 @@ right | Development | Success | 172.21.0.5 ; lookupIPAndMessageFromIndex -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | EVAL client_ip = client_ip::keyword @@ -1071,7 +1071,7 @@ ignoreOrder:true ; lookupIPAndMessageFromIndexKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | EVAL client_ip = client_ip::keyword @@ -1092,7 +1092,7 @@ ignoreOrder:true ; lookupIPAndMessageFromIndexStats -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | EVAL client_ip = client_ip::keyword @@ -1110,7 +1110,7 @@ count:long | env:keyword | type:keyword ; lookupIPAndMessageFromIndexStatsKeep -required_capability: join_lookup_v9 
+required_capability: join_lookup_v10 FROM sample_data | EVAL client_ip = client_ip::keyword @@ -1129,7 +1129,7 @@ count:long | env:keyword | type:keyword ; statsAndLookupIPAndMessageFromIndex -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | EVAL client_ip = client_ip::keyword @@ -1148,7 +1148,7 @@ count:long | client_ip:keyword | message:keyword | env:keyword | type:keyw ; lookupIPAndMessageFromIndexChainedEvalKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | EVAL client_ip = client_ip::keyword @@ -1170,7 +1170,7 @@ ignoreOrder:true ; lookupIPAndMessageFromIndexChainedRenameKeep -required_capability: join_lookup_v9 +required_capability: join_lookup_v10 FROM sample_data | EVAL client_ip = client_ip::keyword @@ -1190,3 +1190,19 @@ ignoreOrder:true 2023-10-23T12:27:28.948Z | 172.21.2.113 | 2764889 | QA | null 2023-10-23T12:15:03.360Z | 172.21.2.162 | 3450233 | QA | null ; + +lookupIndexInFromRepeatedRowBug +required_capability: join_lookup_v10 +FROM languages_lookup_non_unique_key +| WHERE language_code == 1 +| LOOKUP JOIN languages_lookup ON language_code +| KEEP language_code, language_name, country +| SORT language_code, language_name, country +; + +language_code:integer | language_name:keyword | country:text +1 | English | Canada +1 | English | United Kingdom +1 | English | United States of America +1 | English | null +; diff --git a/x-pack/plugin/esql/src/main/java/org/elasticsearch/xpack/esql/action/EsqlCapabilities.java b/x-pack/plugin/esql/src/main/java/org/elasticsearch/xpack/esql/action/EsqlCapabilities.java index 22f7937ccf4ff..5c259caa9c940 100644 --- a/x-pack/plugin/esql/src/main/java/org/elasticsearch/xpack/esql/action/EsqlCapabilities.java +++ b/x-pack/plugin/esql/src/main/java/org/elasticsearch/xpack/esql/action/EsqlCapabilities.java @@ -560,7 +560,7 @@ public enum Cap { /** * LOOKUP JOIN */ - JOIN_LOOKUP_V9(Build.current().isSnapshot()), + 
JOIN_LOOKUP_V10(Build.current().isSnapshot()), /** * Fix for https://github.com/elastic/elasticsearch/issues/117054 diff --git a/x-pack/plugin/esql/src/main/java/org/elasticsearch/xpack/esql/planner/PlannerUtils.java b/x-pack/plugin/esql/src/main/java/org/elasticsearch/xpack/esql/planner/PlannerUtils.java index a312d048db0ad..5325145a77ade 100644 --- a/x-pack/plugin/esql/src/main/java/org/elasticsearch/xpack/esql/planner/PlannerUtils.java +++ b/x-pack/plugin/esql/src/main/java/org/elasticsearch/xpack/esql/planner/PlannerUtils.java @@ -14,7 +14,6 @@ import org.elasticsearch.compute.data.BlockFactory; import org.elasticsearch.compute.data.ElementType; import org.elasticsearch.core.Tuple; -import org.elasticsearch.index.IndexMode; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.SearchExecutionContext; @@ -27,13 +26,13 @@ import org.elasticsearch.xpack.esql.core.type.DataType; import org.elasticsearch.xpack.esql.core.util.Holder; import org.elasticsearch.xpack.esql.core.util.Queries; -import org.elasticsearch.xpack.esql.index.EsIndex; import org.elasticsearch.xpack.esql.optimizer.LocalLogicalOptimizerContext; import org.elasticsearch.xpack.esql.optimizer.LocalLogicalPlanOptimizer; import org.elasticsearch.xpack.esql.optimizer.LocalPhysicalOptimizerContext; import org.elasticsearch.xpack.esql.optimizer.LocalPhysicalPlanOptimizer; import org.elasticsearch.xpack.esql.plan.logical.EsRelation; import org.elasticsearch.xpack.esql.plan.logical.Filter; +import org.elasticsearch.xpack.esql.plan.logical.join.Join; import org.elasticsearch.xpack.esql.plan.physical.AggregateExec; import org.elasticsearch.xpack.esql.plan.physical.EsSourceExec; import org.elasticsearch.xpack.esql.plan.physical.EstimatesRowSize; @@ -110,27 +109,10 @@ public static Set planConcreteIndices(PhysicalPlan plan) { return Set.of(); } var indices = new LinkedHashSet(); - // TODO: This only works for LEFT join, we 
still need to support RIGHT join - forEachUpWithChildren(plan, node -> { - if (node instanceof FragmentExec f) { - f.fragment().forEachUp(EsRelation.class, r -> indices.addAll(r.index().concreteIndices())); - } - }, node -> node instanceof LookupJoinExec join ? List.of(join.left()) : node.children()); + forEachFromRelation(plan, relation -> indices.addAll(relation.index().concreteIndices())); return indices; } - /** - * Similar to {@link Node#forEachUp(Consumer)}, but with a custom callback to get the node children. - */ - private static > void forEachUpWithChildren( - T node, - Consumer action, - Function> childrenGetter - ) { - childrenGetter.apply(node).forEach(c -> forEachUpWithChildren(c, action, childrenGetter)); - action.accept(node); - } - /** * Returns the original indices specified in the FROM command of the query. We need the original query to resolve alias filters. */ @@ -139,16 +121,41 @@ public static String[] planOriginalIndices(PhysicalPlan plan) { return Strings.EMPTY_ARRAY; } var indices = new LinkedHashSet(); - plan.forEachUp( - FragmentExec.class, - f -> f.fragment().forEachUp(EsRelation.class, r -> addOriginalIndexIfNotLookup(indices, r.index())) - ); + forEachFromRelation(plan, relation -> indices.addAll(asList(Strings.commaDelimitedListToStringArray(relation.index().name())))); return indices.toArray(String[]::new); } - private static void addOriginalIndexIfNotLookup(Set indices, EsIndex index) { - if (index.indexNameWithModes().get(index.name()) != IndexMode.LOOKUP) { - indices.addAll(asList(Strings.commaDelimitedListToStringArray(index.name()))); + /** + * Iterates over the plan and applies the action to each {@link EsRelation} node. + *
+ * This method ignores the right side of joins.
+ */ + private static void forEachFromRelation(PhysicalPlan plan, Consumer action) { + // Take the non-join-side fragments + forEachUpWithChildren(plan, FragmentExec.class, fragment -> { + // Take the non-join-side relations + forEachUpWithChildren( + fragment.fragment(), + EsRelation.class, + action, + node -> node instanceof Join join ? List.of(join.left()) : node.children() + ); + }, node -> node instanceof LookupJoinExec join ? List.of(join.left()) : node.children()); + } + + /** + * Similar to {@link Node#forEachUp(Consumer)}, but with a custom callback to get the node children. + */ + private static , E extends T> void forEachUpWithChildren( + T node, + Class typeToken, + Consumer action, + Function> childrenGetter + ) { + childrenGetter.apply(node).forEach(c -> forEachUpWithChildren(c, typeToken, action, childrenGetter)); + if (typeToken.isInstance(node)) { + action.accept(typeToken.cast(node)); } } diff --git a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/CsvTests.java b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/CsvTests.java index 76744957ff5fc..e78d42db11d25 100644 --- a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/CsvTests.java +++ b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/CsvTests.java @@ -263,7 +263,7 @@ public final void test() throws Throwable { ); assumeFalse( "lookup join disabled for csv tests", - testCase.requiredCapabilities.contains(EsqlCapabilities.Cap.JOIN_LOOKUP_V9.capabilityName()) + testCase.requiredCapabilities.contains(EsqlCapabilities.Cap.JOIN_LOOKUP_V10.capabilityName()) ); assumeFalse( "can't use TERM function in csv tests", diff --git a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/AnalyzerTests.java b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/AnalyzerTests.java index be15bb7de8b44..dc4120f357725 100644 --- a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/AnalyzerTests.java +++ 
b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/AnalyzerTests.java @@ -2140,7 +2140,7 @@ public void testLookupMatchTypeWrong() { } public void testLookupJoinUnknownIndex() { - assumeTrue("requires LOOKUP JOIN capability", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("requires LOOKUP JOIN capability", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); String errorMessage = "Unknown index [foobar]"; IndexResolution missingLookupIndex = IndexResolution.invalid(errorMessage); @@ -2169,7 +2169,7 @@ public void testLookupJoinUnknownIndex() { } public void testLookupJoinUnknownField() { - assumeTrue("requires LOOKUP JOIN capability", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("requires LOOKUP JOIN capability", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); String query = "FROM test | LOOKUP JOIN languages_lookup ON last_name"; String errorMessage = "1:45: Unknown column [last_name] in right side of join"; @@ -2192,7 +2192,7 @@ public void testLookupJoinUnknownField() { } public void testMultipleLookupJoinsGiveDifferentAttributes() { - assumeTrue("requires LOOKUP JOIN capability", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("requires LOOKUP JOIN capability", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); // The field attributes that get contributed by different LOOKUP JOIN commands must have different name ids, // even if they have the same names. 
Otherwise, things like dependency analysis - like in PruneColumns - cannot work based on @@ -2222,7 +2222,7 @@ public void testMultipleLookupJoinsGiveDifferentAttributes() { } public void testLookupJoinIndexMode() { - assumeTrue("requires LOOKUP JOIN capability", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("requires LOOKUP JOIN capability", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); var indexResolution = AnalyzerTestUtils.expandedDefaultIndexResolution(); var lookupResolution = AnalyzerTestUtils.defaultLookupResolution(); diff --git a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/ParsingTests.java b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/ParsingTests.java index 2f6cf46f2e2b1..180e32fb7c15d 100644 --- a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/ParsingTests.java +++ b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/ParsingTests.java @@ -113,7 +113,7 @@ public void testTooBigQuery() { } public void testJoinOnConstant() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertEquals( "1:55: JOIN ON clause only supports fields at the moment, found [123]", error("row languages = 1, gender = \"f\" | lookup join test on 123") @@ -129,7 +129,7 @@ public void testJoinOnConstant() { } public void testJoinOnMultipleFields() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertEquals( "1:35: JOIN ON clause only supports one field at the moment, found [2]", error("row languages = 1, gender = \"f\" | lookup join test on gender, languages") @@ -137,7 +137,7 @@ public void testJoinOnMultipleFields() { } public void 
testJoinTwiceOnTheSameField() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertEquals( "1:35: JOIN ON clause only supports one field at the moment, found [2]", error("row languages = 1, gender = \"f\" | lookup join test on languages, languages") @@ -145,7 +145,7 @@ public void testJoinTwiceOnTheSameField() { } public void testJoinTwiceOnTheSameField_TwoLookups() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertEquals( "1:80: JOIN ON clause only supports one field at the moment, found [2]", error("row languages = 1, gender = \"f\" | lookup join test on languages | eval x = 1 | lookup join test on gender, gender") diff --git a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/VerifierTests.java b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/VerifierTests.java index 533cc59b824ce..fe6d1e00e5d24 100644 --- a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/VerifierTests.java +++ b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/VerifierTests.java @@ -1974,7 +1974,7 @@ public void testSortByAggregate() { } public void testLookupJoinDataTypeMismatch() { - assumeTrue("requires LOOKUP JOIN capability", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("requires LOOKUP JOIN capability", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); query("FROM test | EVAL language_code = languages | LOOKUP JOIN languages_lookup ON language_code"); diff --git a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/optimizer/LogicalPlanOptimizerTests.java 
b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/optimizer/LogicalPlanOptimizerTests.java index 672eef7076c64..d46572b7c8561 100644 --- a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/optimizer/LogicalPlanOptimizerTests.java +++ b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/optimizer/LogicalPlanOptimizerTests.java @@ -4927,7 +4927,7 @@ public void testPlanSanityCheck() throws Exception { } public void testPlanSanityCheckWithBinaryPlans() throws Exception { - assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); var plan = optimizedPlan(""" FROM test @@ -6003,7 +6003,7 @@ public void testLookupStats() { * \_EsRelation[languages_lookup][LOOKUP][language_code{f}#18, language_name{f}#19] */ public void testLookupJoinPushDownFilterOnJoinKeyWithRename() { - assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); String query = """ FROM test @@ -6045,7 +6045,7 @@ public void testLookupJoinPushDownFilterOnJoinKeyWithRename() { * \_EsRelation[languages_lookup][LOOKUP][language_code{f}#18, language_name{f}#19] */ public void testLookupJoinPushDownFilterOnLeftSideField() { - assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); String query = """ FROM test @@ -6088,7 +6088,7 @@ public void testLookupJoinPushDownFilterOnLeftSideField() { * \_EsRelation[languages_lookup][LOOKUP][language_code{f}#18, language_name{f}#19] */ public void testLookupJoinPushDownDisabledForLookupField() { - assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); String query = """ FROM 
test @@ -6132,7 +6132,7 @@ public void testLookupJoinPushDownDisabledForLookupField() { * \_EsRelation[languages_lookup][LOOKUP][language_code{f}#19, language_name{f}#20] */ public void testLookupJoinPushDownSeparatedForConjunctionBetweenLeftAndRightField() { - assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); String query = """ FROM test @@ -6183,7 +6183,7 @@ public void testLookupJoinPushDownSeparatedForConjunctionBetweenLeftAndRightFiel * \_EsRelation[languages_lookup][LOOKUP][language_code{f}#19, language_name{f}#20] */ public void testLookupJoinPushDownDisabledForDisjunctionBetweenLeftAndRightField() { - assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); String query = """ FROM test diff --git a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/optimizer/PhysicalPlanOptimizerTests.java b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/optimizer/PhysicalPlanOptimizerTests.java index 80f2772945e93..591ceff7120e8 100644 --- a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/optimizer/PhysicalPlanOptimizerTests.java +++ b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/optimizer/PhysicalPlanOptimizerTests.java @@ -2615,7 +2615,7 @@ public void testVerifierOnMissingReferences() { } public void testVerifierOnMissingReferencesWithBinaryPlans() throws Exception { - assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); // Do not assert serialization: // This will have a LookupJoinExec, which is not serializable because it doesn't leave the coordinator. 
@@ -7298,7 +7298,7 @@ public void testLookupThenTopN() { } public void testLookupJoinFieldLoading() throws Exception { - assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); TestDataSource data = dataSetWithLookupIndices(Map.of("lookup_index", List.of("first_name", "foo", "bar", "baz"))); @@ -7375,7 +7375,7 @@ public void testLookupJoinFieldLoading() throws Exception { } public void testLookupJoinFieldLoadingTwoLookups() throws Exception { - assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); TestDataSource data = dataSetWithLookupIndices( Map.of( @@ -7429,7 +7429,7 @@ public void testLookupJoinFieldLoadingTwoLookups() throws Exception { @AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/119082") public void testLookupJoinFieldLoadingTwoLookupsProjectInBetween() throws Exception { - assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); TestDataSource data = dataSetWithLookupIndices( Map.of( @@ -7470,7 +7470,7 @@ public void testLookupJoinFieldLoadingTwoLookupsProjectInBetween() throws Except @AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/118778") public void testLookupJoinFieldLoadingDropAllFields() throws Exception { - assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("Requires LOOKUP JOIN", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); TestDataSource data = dataSetWithLookupIndices(Map.of("lookup_index", List.of("first_name", "foo", "bar", "baz"))); diff --git a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/session/IndexResolverFieldNamesTests.java 
b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/session/IndexResolverFieldNamesTests.java index b344bd6b63255..4db4f7925d4ff 100644 --- a/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/session/IndexResolverFieldNamesTests.java +++ b/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/session/IndexResolverFieldNamesTests.java @@ -1365,7 +1365,7 @@ public void testMetrics() { } public void testLookupJoin() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertFieldNames( "FROM employees | KEEP languages | RENAME languages AS language_code | LOOKUP JOIN languages_lookup ON language_code", Set.of("languages", "languages.*", "language_code", "language_code.*"), @@ -1374,7 +1374,7 @@ public void testLookupJoin() { } public void testLookupJoinKeep() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertFieldNames( """ FROM employees @@ -1388,7 +1388,7 @@ public void testLookupJoinKeep() { } public void testLookupJoinKeepWildcard() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertFieldNames( """ FROM employees @@ -1402,7 +1402,7 @@ public void testLookupJoinKeepWildcard() { } public void testMultiLookupJoin() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertFieldNames( """ FROM sample_data @@ -1415,7 +1415,7 @@ public void testMultiLookupJoin() { } public void 
testMultiLookupJoinKeepBefore() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertFieldNames( """ FROM sample_data @@ -1429,7 +1429,7 @@ public void testMultiLookupJoinKeepBefore() { } public void testMultiLookupJoinKeepBetween() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertFieldNames( """ FROM sample_data @@ -1454,7 +1454,7 @@ public void testMultiLookupJoinKeepBetween() { } public void testMultiLookupJoinKeepAfter() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertFieldNames( """ FROM sample_data @@ -1481,7 +1481,7 @@ public void testMultiLookupJoinKeepAfter() { } public void testMultiLookupJoinKeepAfterWildcard() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertFieldNames( """ FROM sample_data @@ -1495,7 +1495,7 @@ public void testMultiLookupJoinKeepAfterWildcard() { } public void testMultiLookupJoinSameIndex() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertFieldNames( """ FROM sample_data @@ -1509,7 +1509,7 @@ public void testMultiLookupJoinSameIndex() { } public void testMultiLookupJoinSameIndexKeepBefore() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + 
assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertFieldNames( """ FROM sample_data @@ -1524,7 +1524,7 @@ public void testMultiLookupJoinSameIndexKeepBefore() { } public void testMultiLookupJoinSameIndexKeepBetween() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertFieldNames( """ FROM sample_data @@ -1550,7 +1550,7 @@ public void testMultiLookupJoinSameIndexKeepBetween() { } public void testMultiLookupJoinSameIndexKeepAfter() { - assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V9.isEnabled()); + assumeTrue("LOOKUP JOIN available as snapshot only", EsqlCapabilities.Cap.JOIN_LOOKUP_V10.isEnabled()); assertFieldNames( """ FROM sample_data diff --git a/x-pack/plugin/identity-provider/src/main/plugin-metadata/entitlement-policy.yaml b/x-pack/plugin/identity-provider/src/main/plugin-metadata/entitlement-policy.yaml new file mode 100644 index 0000000000000..d826de8ca8725 --- /dev/null +++ b/x-pack/plugin/identity-provider/src/main/plugin-metadata/entitlement-policy.yaml @@ -0,0 +1,2 @@ +ALL-UNNAMED: + - set_https_connection_properties # potentially required by apache.httpcomponents diff --git a/x-pack/plugin/inference/src/main/plugin-metadata/entitlement-policy.yaml b/x-pack/plugin/inference/src/main/plugin-metadata/entitlement-policy.yaml new file mode 100644 index 0000000000000..41383d0b6736a --- /dev/null +++ b/x-pack/plugin/inference/src/main/plugin-metadata/entitlement-policy.yaml @@ -0,0 +1,2 @@ +com.google.api.client: + - set_https_connection_properties diff --git a/x-pack/plugin/monitoring/src/main/plugin-metadata/entitlement-policy.yaml b/x-pack/plugin/monitoring/src/main/plugin-metadata/entitlement-policy.yaml new file mode 100644 index 0000000000000..d826de8ca8725 --- /dev/null +++ 
b/x-pack/plugin/monitoring/src/main/plugin-metadata/entitlement-policy.yaml @@ -0,0 +1,2 @@ +ALL-UNNAMED: + - set_https_connection_properties # potentially required by apache.httpcomponents diff --git a/x-pack/plugin/security/src/main/plugin-metadata/entitlement-policy.yaml b/x-pack/plugin/security/src/main/plugin-metadata/entitlement-policy.yaml new file mode 100644 index 0000000000000..98c6b81553572 --- /dev/null +++ b/x-pack/plugin/security/src/main/plugin-metadata/entitlement-policy.yaml @@ -0,0 +1,2 @@ +org.elasticsearch.security: + - set_https_connection_properties # for CommandLineHttpClient diff --git a/x-pack/plugin/src/yamlRestTest/resources/rest-api-spec/test/esql/190_lookup_join.yml b/x-pack/plugin/src/yamlRestTest/resources/rest-api-spec/test/esql/190_lookup_join.yml index 57d2dac23026b..1567b6b556bdd 100644 --- a/x-pack/plugin/src/yamlRestTest/resources/rest-api-spec/test/esql/190_lookup_join.yml +++ b/x-pack/plugin/src/yamlRestTest/resources/rest-api-spec/test/esql/190_lookup_join.yml @@ -6,7 +6,7 @@ setup: - method: POST path: /_query parameters: [] - capabilities: [join_lookup_v9] + capabilities: [join_lookup_v10] reason: "uses LOOKUP JOIN" - do: indices.create: diff --git a/x-pack/qa/repository-old-versions-compatibility/build.gradle b/x-pack/qa/repository-old-versions-compatibility/build.gradle new file mode 100644 index 0000000000000..37e5eea85a08b --- /dev/null +++ b/x-pack/qa/repository-old-versions-compatibility/build.gradle @@ -0,0 +1,25 @@ +/* + * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one + * or more contributor license agreements. Licensed under the "Elastic License + * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side + * Public License v 1"; you may not use this file except in compliance with, at + * your election, the "Elastic License 2.0", the "GNU Affero General Public + * License v3.0 only", or the "Server Side Public License, v 1". 
+ */ +apply plugin: 'elasticsearch.internal-java-rest-test' +apply plugin: 'elasticsearch.internal-test-artifact' +apply plugin: 'elasticsearch.bwc-test' + +buildParams.bwcVersions.withLatestReadOnlyIndexCompatible { bwcVersion -> + tasks.named("javaRestTest").configure { + systemProperty("tests.minimum.index.compatible", bwcVersion) + usesBwcDistribution(bwcVersion) + enabled = true + } +} + +tasks.withType(Test).configureEach { + // CI doesn't like it when there are multiple clusters running at once + maxParallelForks = 1 +} + diff --git a/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/java/org/elasticsearch/oldrepos/AbstractUpgradeCompatibilityTestCase.java b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/java/org/elasticsearch/oldrepos/AbstractUpgradeCompatibilityTestCase.java new file mode 100644 index 0000000000000..4ff2b80aa29cc --- /dev/null +++ b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/java/org/elasticsearch/oldrepos/AbstractUpgradeCompatibilityTestCase.java @@ -0,0 +1,211 @@ +/* + * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one + * or more contributor license agreements. Licensed under the Elastic License + * 2.0; you may not use this file except in compliance with the Elastic License + * 2.0.
+ */ + +package org.elasticsearch.oldrepos; + +import com.carrotsearch.randomizedtesting.TestMethodAndParams; +import com.carrotsearch.randomizedtesting.annotations.Name; +import com.carrotsearch.randomizedtesting.annotations.ParametersFactory; +import com.carrotsearch.randomizedtesting.annotations.TestCaseOrdering; + +import org.apache.http.util.EntityUtils; +import org.elasticsearch.client.Request; +import org.elasticsearch.client.Response; +import org.elasticsearch.client.RestClient; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.repositories.fs.FsRepository; +import org.elasticsearch.test.cluster.ElasticsearchCluster; +import org.elasticsearch.test.cluster.local.LocalClusterConfigProvider; +import org.elasticsearch.test.cluster.local.distribution.DistributionType; +import org.elasticsearch.test.cluster.util.Version; +import org.elasticsearch.test.rest.ESRestTestCase; +import org.junit.Before; +import org.junit.ClassRule; +import org.junit.rules.RuleChain; +import org.junit.rules.TemporaryFolder; +import org.junit.rules.TestRule; + +import java.io.IOException; +import java.io.OutputStream; +import java.net.URISyntaxException; +import java.nio.file.Files; +import java.nio.file.Path; +import java.nio.file.Paths; +import java.util.Comparator; +import java.util.Objects; +import java.util.stream.Stream; +import java.util.zip.ZipEntry; +import java.util.zip.ZipInputStream; + +import static org.elasticsearch.test.cluster.util.Version.CURRENT; +import static org.elasticsearch.test.cluster.util.Version.fromString; +import static org.elasticsearch.test.rest.ObjectPath.createFromResponse; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.notNullValue; + +@TestCaseOrdering(AbstractUpgradeCompatibilityTestCase.TestCaseOrdering.class) +public abstract class AbstractUpgradeCompatibilityTestCase extends ESRestTestCase { + + protected static final Version VERSION_MINUS_2 = 
fromString(System.getProperty("tests.minimum.index.compatible")); + protected static final Version VERSION_MINUS_1 = fromString(System.getProperty("tests.minimum.wire.compatible")); + protected static final Version VERSION_CURRENT = CURRENT; + + protected static TemporaryFolder REPOSITORY_PATH = new TemporaryFolder(); + + protected static LocalClusterConfigProvider clusterConfig = c -> {}; + private static ElasticsearchCluster cluster = ElasticsearchCluster.local() + .distribution(DistributionType.DEFAULT) + .version(VERSION_MINUS_1) + .nodes(2) + .setting("xpack.security.enabled", "false") + .setting("xpack.ml.enabled", "false") + .setting("path.repo", () -> REPOSITORY_PATH.getRoot().getPath()) + .apply(() -> clusterConfig) + .build(); + + @ClassRule + public static TestRule ruleChain = RuleChain.outerRule(REPOSITORY_PATH).around(cluster); + + private static boolean upgradeFailed = false; + + private final Version clusterVersion; + + public AbstractUpgradeCompatibilityTestCase(@Name("cluster") Version clusterVersion) { + this.clusterVersion = clusterVersion; + } + + @ParametersFactory + public static Iterable<Object[]> parameters() { + return Stream.of(VERSION_MINUS_1, CURRENT).map(v -> new Object[] { v }).toList(); + } + + @Override + protected String getTestRestCluster() { + return cluster.getHttpAddresses(); + } + + @Override + protected boolean preserveClusterUponCompletion() { + return true; + } + + /** + * Compares the current version against the clusterVersion and performs a "full cluster restart" upgrade if the current + * version is before clusterVersion. The cluster version is fetched externally and is controlled by the Gradle setup. + * + * @throws Exception if the upgrade or the client re-initialization fails + */ + @Before + public void maybeUpgrade() throws Exception { + // We want to use this test suite for the V9 upgrade, but we are not fully committed to necessarily having N-2 support + // in V10, so we add a check here to ensure we'll revisit this decision once V10 exists.
+ assertThat("Explicit check that N-2 version is Elasticsearch 7", VERSION_MINUS_2.getMajor(), equalTo(7)); + + var currentVersion = clusterVersion(); + if (currentVersion.before(clusterVersion)) { + try { + cluster.upgradeToVersion(clusterVersion); + closeClients(); + initClient(); + } catch (Exception e) { + upgradeFailed = true; + throw e; + } + } + + // Skip remaining tests if upgrade failed + assumeFalse("Cluster upgrade failed", upgradeFailed); + } + + protected static Version clusterVersion() throws Exception { + var response = assertOK(client().performRequest(new Request("GET", "/"))); + var responseBody = createFromResponse(response); + var version = Version.fromString(responseBody.evaluate("version.number").toString()); + assertThat("Failed to retrieve cluster version", version, notNullValue()); + return version; + } + + /** + * Executes the test suite in version order, using the parameters provided by {@link #parameters()}. + */ + public static class TestCaseOrdering implements Comparator<TestMethodAndParams> { + @Override + public int compare(TestMethodAndParams o1, TestMethodAndParams o2) { + var version1 = (Version) o1.getInstanceArguments().get(0); + var version2 = (Version) o2.getInstanceArguments().get(0); + return version1.compareTo(version2); + } + } + + public final void verifyCompatibility(String version) throws Exception { + final String repository = "repository"; + final String snapshot = "snapshot"; + final String index = "index"; + final int numDocs = 5; + + String repositoryPath = REPOSITORY_PATH.getRoot().getPath(); + + if (VERSION_MINUS_1.equals(clusterVersion())) { + assertEquals(VERSION_MINUS_1, clusterVersion()); + assertTrue(getIndices(client()).isEmpty()); + + // Copy a snapshot of an index with 5 documents + copySnapshotFromResources(repositoryPath, version); + registerRepository(client(), repository, FsRepository.TYPE, true, Settings.builder().put("location", repositoryPath).build()); + recover(client(), repository, snapshot, index); + +
assertTrue(getIndices(client()).contains(index)); + assertDocCount(client(), index, numDocs); + + return; + } + + if (VERSION_CURRENT.equals(clusterVersion())) { + assertEquals(VERSION_CURRENT, clusterVersion()); + assertTrue(getIndices(client()).contains(index)); + assertDocCount(client(), index, numDocs); + } + } + + public abstract void recover(RestClient restClient, String repository, String snapshot, String index) throws Exception; + + private static String getIndices(RestClient client) throws IOException { + final Request request = new Request("GET", "_cat/indices"); + Response response = client.performRequest(request); + return EntityUtils.toString(response.getEntity()); + } + + private static void copySnapshotFromResources(String repositoryPath, String version) throws IOException, URISyntaxException { + Path zipFilePath = Paths.get( + Objects.requireNonNull(AbstractUpgradeCompatibilityTestCase.class.getClassLoader().getResource("snapshot_v" + version + ".zip")) + .toURI() + ); + unzip(zipFilePath, Paths.get(repositoryPath)); + } + + private static void unzip(Path zipFilePath, Path outputDir) throws IOException { + try (ZipInputStream zipIn = new ZipInputStream(Files.newInputStream(zipFilePath))) { + ZipEntry entry; + while ((entry = zipIn.getNextEntry()) != null) { + Path outputPath = outputDir.resolve(entry.getName()); + if (entry.isDirectory()) { + Files.createDirectories(outputPath); + } else { + Files.createDirectories(outputPath.getParent()); + try (OutputStream out = Files.newOutputStream(outputPath)) { + byte[] buffer = new byte[1024]; + int len; + while ((len = zipIn.read(buffer)) > 0) { + out.write(buffer, 0, len); + } + } + } + zipIn.closeEntry(); + } + } + } +} diff --git a/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/java/org/elasticsearch/oldrepos/archiveindex/ArchiveIndexTestCase.java b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/java/org/elasticsearch/oldrepos/archiveindex/ArchiveIndexTestCase.java new 
file mode 100644 index 0000000000000..17bdb76e0eae5 --- /dev/null +++ b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/java/org/elasticsearch/oldrepos/archiveindex/ArchiveIndexTestCase.java @@ -0,0 +1,54 @@ +/* + * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one + * or more contributor license agreements. Licensed under the Elastic License + * 2.0; you may not use this file except in compliance with the Elastic License + * 2.0. + */ + +package org.elasticsearch.oldrepos.archiveindex; + +import org.elasticsearch.client.Request; +import org.elasticsearch.client.RestClient; +import org.elasticsearch.common.Strings; +import org.elasticsearch.oldrepos.AbstractUpgradeCompatibilityTestCase; +import org.elasticsearch.test.cluster.util.Version; + +import static org.elasticsearch.test.rest.ObjectPath.createFromResponse; + +/** + * Test suite for archive indices backward compatibility with N-2 versions. + * The suite creates a cluster on the N-1 version, where N is the current version, restores snapshots taken on + * old clusters (version 5/6), and upgrades the cluster to the current version. Test methods are executed after each upgrade. + * + * For example, the suite creates a version 8 cluster, restores a snapshot of an index created on ES version 5/6, + * and then upgrades the cluster to version 9, verifying that the archive index is successfully restored. + */ +public class ArchiveIndexTestCase extends AbstractUpgradeCompatibilityTestCase { + + static { + clusterConfig = config -> config.setting("xpack.license.self_generated.type", "trial"); + } + + public ArchiveIndexTestCase(Version version) { + super(version); + } + + /** + * Overrides the snapshot-restore operation for the archive-indices scenario.
+ */ + @Override + public void recover(RestClient client, String repository, String snapshot, String index) throws Exception { + var request = new Request("POST", "/_snapshot/" + repository + "/" + snapshot + "/_restore"); + request.addParameter("wait_for_completion", "true"); + request.setJsonEntity(Strings.format(""" + { + "indices": "%s", + "include_global_state": false, + "rename_pattern": "(.+)", + "include_aliases": false + }""", index)); + createFromResponse(client.performRequest(request)); + } +} diff --git a/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/java/org/elasticsearch/oldrepos/archiveindex/RestoreFromVersion5IT.java b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/java/org/elasticsearch/oldrepos/archiveindex/RestoreFromVersion5IT.java new file mode 100644 index 0000000000000..9f62d65592a37 --- /dev/null +++ b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/java/org/elasticsearch/oldrepos/archiveindex/RestoreFromVersion5IT.java @@ -0,0 +1,21 @@ +/* + * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one + * or more contributor license agreements. Licensed under the Elastic License + * 2.0; you may not use this file except in compliance with the Elastic License + * 2.0. 
+ */ + +package org.elasticsearch.oldrepos.archiveindex; + +import org.elasticsearch.test.cluster.util.Version; + +public class RestoreFromVersion5IT extends ArchiveIndexTestCase { + + public RestoreFromVersion5IT(Version version) { + super(version); + } + + public void testArchiveIndex() throws Exception { + verifyCompatibility("5"); + } +} diff --git a/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/java/org/elasticsearch/oldrepos/archiveindex/RestoreFromVersion6IT.java b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/java/org/elasticsearch/oldrepos/archiveindex/RestoreFromVersion6IT.java new file mode 100644 index 0000000000000..b3cca45c205f6 --- /dev/null +++ b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/java/org/elasticsearch/oldrepos/archiveindex/RestoreFromVersion6IT.java @@ -0,0 +1,21 @@ +/* + * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one + * or more contributor license agreements. Licensed under the Elastic License + * 2.0; you may not use this file except in compliance with the Elastic License + * 2.0. 
+ */ + +package org.elasticsearch.oldrepos.archiveindex; + +import org.elasticsearch.test.cluster.util.Version; + +public class RestoreFromVersion6IT extends ArchiveIndexTestCase { + + public RestoreFromVersion6IT(Version version) { + super(version); + } + + public void testArchiveIndex() throws Exception { + verifyCompatibility("6"); + } +} diff --git a/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/resources/README.md b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/resources/README.md new file mode 100644 index 0000000000000..c937448e97236 --- /dev/null +++ b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/resources/README.md @@ -0,0 +1,147 @@ + +### Create data structure and config file +``` +mkdir /tmp/sharedESData +mkdir /tmp/sharedESData/config +mkdir /tmp/sharedESData/data +mkdir /tmp/sharedESData/snapshots +``` + +``` +touch /tmp/sharedESData/config/elasticsearch.yml + +cat << 'EOF' >> /tmp/sharedESData/config/elasticsearch.yml +cluster.name: "archive-indices-test" +node.name: "node-1" +path.repo: ["/usr/share/elasticsearch/snapshots"] +network.host: 0.0.0.0 +http.port: 9200 + +discovery.type: single-node +xpack.security.enabled: false +EOF +``` + +### Define path +``` +SHARED_FOLDER=/tmp/sharedESData +``` + +### Deploy container +``` +docker run -d --name es \ +-p 9200:9200 -p 9300:9300 \ +-v ${SHARED_FOLDER}/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \ +-v ${SHARED_FOLDER}/data:/usr/share/elasticsearch/data \ +-v ${SHARED_FOLDER}/snapshots:/usr/share/elasticsearch/snapshots \ +--env "discovery.type=single-node" \ +docker.elastic.co/elasticsearch/elasticsearch:5.6.16 + +# For version 6, use this image instead: +docker.elastic.co/elasticsearch/elasticsearch:6.8.23 +``` + +### Create Index Version 5 +``` +PUT /index +{ + "settings": { + "number_of_shards": 1, + "number_of_replicas": 1 + }, + "mappings": { + "my_type": { + "properties": { + "title": { + "type": "text" + }, + "created_at": { + "type": "date" +
}, + "views": { + "type": "integer" + } + } + } + } +} +``` + +### Create Index Version 6 +``` +PUT /index +{ + "settings": { + "number_of_shards": 1, + "number_of_replicas": 1 + }, + "mappings": { + "_doc": { + "properties": { + "title": { + "type": "text" + }, + "content": { + "type": "text" + }, + "created_at": { + "type": "date" + } + } + } + } +} +``` + +### Add documents Version 5 +``` +POST /index/my_type +{ + "title": "Title 5", + "content": "Elasticsearch is a powerful search engine.", + "created_at": "2024-12-16" +} +``` + +### Add documents Version 6 +``` +POST /index/_doc +{ + "title": "Title 6", + "content": "Elasticsearch is a powerful search engine.", + "created_at": "2024-12-16" +} +``` + +### Register repository +``` +PUT /_snapshot/repository +{ + "type": "fs", + "settings": { + "location": "/usr/share/elasticsearch/snapshots", + "compress": true + } +} +``` + +### Create a snapshot +``` +PUT /_snapshot/repository/snapshot +{ + "indices": "index", + "ignore_unavailable": "true", + "include_global_state": false +} +``` + +### Create zip file +``` +# Zip from inside the snapshots directory so entries are stored with +# repository-relative paths, matching what the test harness unzips into the repo root. +cd /tmp/sharedESData/snapshots +zip -r /tmp/snapshot.zip . +``` + +### Cleanup +``` +docker rm -f es +rm -rf /tmp/sharedESData/ +``` diff --git a/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/resources/snapshot_v5.zip b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/resources/snapshot_v5.zip new file mode 100644 index 0000000000000..54dcf4f6182cc Binary files /dev/null and b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/resources/snapshot_v5.zip differ diff --git a/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/resources/snapshot_v6.zip b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/resources/snapshot_v6.zip new file mode 100644 index 0000000000000..d83152fb71c62 Binary files /dev/null and b/x-pack/qa/repository-old-versions-compatibility/src/javaRestTest/resources/snapshot_v6.zip differ
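A note on the `unzip` helper in `AbstractUpgradeCompatibilityTestCase`: it extracts whatever entry names the archive contains, with no check that the resolved path stays inside the output directory. That is fine for the bundled, trusted `snapshot_v5.zip`/`snapshot_v6.zip` resources, but a zip-slip guard is cheap insurance if the recipe above is ever used to regenerate archives. Below is a minimal, self-contained sketch of the same extraction loop with the guard added; the guard and the `transferTo` call are suggestions, not part of this PR's code.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class UnzipSketch {

    // Same shape as the PR's unzip helper, plus a zip-slip guard:
    // reject any entry whose normalized path escapes the output directory.
    static void unzip(Path zipFilePath, Path outputDir) throws IOException {
        try (ZipInputStream zipIn = new ZipInputStream(Files.newInputStream(zipFilePath))) {
            ZipEntry entry;
            while ((entry = zipIn.getNextEntry()) != null) {
                Path outputPath = outputDir.resolve(entry.getName()).normalize();
                if (outputPath.startsWith(outputDir.normalize()) == false) {
                    throw new IOException("Blocked zip-slip entry: " + entry.getName());
                }
                if (entry.isDirectory()) {
                    Files.createDirectories(outputPath);
                } else {
                    Files.createDirectories(outputPath.getParent());
                    try (OutputStream out = Files.newOutputStream(outputPath)) {
                        zipIn.transferTo(out); // JDK 9+; replaces the manual byte-buffer loop
                    }
                }
                zipIn.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny zip fixture with one nested file, as a snapshot archive would contain.
        Path dir = Files.createTempDirectory("unzip-sketch");
        Path zip = dir.resolve("snapshot.zip");
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zip))) {
            zos.putNextEntry(new ZipEntry("indices/index-0"));
            zos.write("data".getBytes());
            zos.closeEntry();
        }
        Path repo = Files.createDirectories(dir.resolve("repo"));
        unzip(zip, repo);
        System.out.println(Files.readString(repo.resolve("indices/index-0")));
    }
}
```

The guard compares normalized paths, so an entry named `../evil` fails the `startsWith` check and extraction aborts instead of writing outside the repository folder.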