Share A Single File System Instance In HadoopTableOperations #92
Comments
Can you give an example of properties you're trying to set here? We can cache file systems in HadoopTableOperations, but most of the systems I've worked on use this pattern of getting the right file system for the URI and using the FileSystem-level cache.
We are experimenting with using Iceberg as a temporary representation of tables that are backed by our internal data warehouse solution. When we do so, however, we need to put the Iceberg table metadata somewhere. We want to put it on local disk, but then we need to encrypt it with a one-time encryption key that only exists for the lifetime of the Spark dataset being read or written. So, for example, we're doing something like this:
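A minimal sketch of the kind of read we mean, in Java; the option names (e.g. spark.iceberg.metadata.encryption-key) are hypothetical placeholders for our internal properties, and the key is generated fresh for each read:

```java
import java.util.Base64;
import javax.crypto.KeyGenerator;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class EncryptedMetadataRead {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder()
        .appName("iceberg-encrypted-metadata")
        .getOrCreate();

    // Generate a one-time key that only lives for the lifetime of this dataset.
    KeyGenerator keyGen = KeyGenerator.getInstance("AES");
    keyGen.init(256);
    String oneTimeKey = Base64.getEncoder()
        .encodeToString(keyGen.generateKey().getEncoded());

    // Option names below are hypothetical, not real Iceberg or Spark properties.
    Dataset<Row> df = spark.read()
        .format("iceberg")
        .option("spark.iceberg.metadata.location", "file:///tmp/iceberg-metadata")
        .option("spark.iceberg.metadata.encryption-key", oneTimeKey)
        .load("db.events");

    df.show();
  }
}
```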
In such a case, we don't want the same file system instance - probably a local FS instance wrapped with some encryption layer - to be cached, because we want a different encryption key every time we run this code.
Okay, how about adding the support you're talking about to HadoopTableOperations and opening a PR? That would unblock you because you'd have the caching level you need, and we could further evaluate the feature.
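For reference, a rough sketch of what per-instance caching in HadoopTableOperations could look like; this is not the existing Iceberg code, and the class and field names are illustrative only:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative only: each table operations instance keeps one FileSystem,
// created from its own Configuration, instead of calling Util.getFS per call.
class CachedFsTableOperations {
  private final Configuration conf;
  private volatile FileSystem fs;  // created once, reused for all metadata I/O

  CachedFsTableOperations(Configuration conf) {
    this.conf = conf;
  }

  FileSystem fs(Path path) {
    if (fs == null) {
      synchronized (this) {
        if (fs == null) {
          try {
            // newInstance bypasses Hadoop's URI-keyed cache, so each instance
            // can carry a FileSystem built from its own Configuration.
            fs = FileSystem.newInstance(path.toUri(), conf);
          } catch (IOException e) {
            throw new UncheckedIOException(e);
          }
        }
      }
    }
    return fs;
  }
}
```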
Also, why do all of the properties include "spark"? |
The properties here are assumed to be injected through Spark.
Properties set through Spark wouldn't need to be specific to Spark. You might use the same ones as session properties in Presto. |
We shouldn't use Util.getFS every time we want a FileSystem object in HadoopTableOperations. An example of where this breaks down is when file system object caching is disabled (by setting fs.<scheme>.impl.disable.cache): a long string of calls on HadoopTableOperations in quick succession will create and GC FileSystem objects very quickly, leading to degraded JVM behavior.

An example of where one would want to disable file system caching is so that different instances of HadoopTableOperations can be set up with FileSystem objects that are configured with different Configuration objects - for example, configuring different Hadoop properties when invoking the data source in various iterations, given that we move forward with #91. Unfortunately, Hadoop caches file system objects by URI, not Configuration, so if one wants different HadoopTableOperations instances to load differently configured file system objects with the same URI, they will instead receive the same FileSystem object back every time, unless they disable FileSystem caching.
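For illustration, a small self-contained example (assumed setup, not taken from this report) of the URI-keyed cache behavior described above, using the local file system scheme:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsCacheDemo {
  public static void main(String[] args) throws Exception {
    URI uri = URI.create("file:///tmp/iceberg");

    Configuration confA = new Configuration();
    confA.set("io.file.buffer.size", "4096");

    Configuration confB = new Configuration();
    confB.set("io.file.buffer.size", "1048576");

    // The cache is keyed by URI scheme/authority (and user), not by the
    // Configuration, so both calls return the same FileSystem instance and
    // confB's settings are effectively ignored.
    FileSystem fs1 = FileSystem.get(uri, confA);
    FileSystem fs2 = FileSystem.get(uri, confB);
    System.out.println("cached instances identical: " + (fs1 == fs2));  // true

    // Disabling the cache for the scheme (fs.file.impl.disable.cache for
    // file://) makes get() build a fresh, per-Configuration instance;
    // FileSystem.newInstance(uri, conf) is another way to bypass the cache.
    confB.setBoolean("fs.file.impl.disable.cache", true);
    FileSystem fs3 = FileSystem.get(uri, confB);
    System.out.println("uncached instance identical: " + (fs1 == fs3));  // false
  }
}
```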