
Update SSOFT generation #880

Merged
merged 10 commits into from
Oct 9, 2024
5 changes: 3 additions & 2 deletions bin/generate_ssoft.py
@@ -21,7 +21,7 @@
from fink_broker.logging_utils import get_fink_logger, inspect_application
from fink_broker.spark_utils import init_sparksession

-from fink_spins.ssoft import build_the_ssoft
+from fink_science.ssoft.processor import build_the_ssoft


def main():
@@ -81,7 +81,7 @@ def main():

# Initialise Spark session
spark = init_sparksession(
-        name="ssoft_{}_{}".format(args.model, version), shuffle_partitions=20
+        name="ssoft_{}_{}".format(args.model, version), shuffle_partitions=200
)

# The level here should be controlled by an argument.
@@ -106,6 +106,7 @@ def main():
frac=args.frac,
model=args.model,
version=version,
+        sb_method="fastnifty",
)

pdf.to_parquet("ssoft_{}_{}.parquet".format(args.model, version))
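
The hunks above change the import path, widen the shuffle to 200 partitions, and pass a new `sb_method="fastnifty"` keyword. A minimal self-contained sketch of the resulting call shape, using a stub in place of `fink_science.ssoft.processor.build_the_ssoft` (only the keyword arguments visible in this diff are assumed; the model and version values are hypothetical):

```python
# Illustrative sketch of the updated call in generate_ssoft.py.
# `build_the_ssoft` below is a STUB standing in for the real function
# in fink_science.ssoft.processor; it just records its arguments.

def build_the_ssoft(frac=None, model=None, version=None, sb_method=None):
    # Stub: return the arguments instead of building the real SSOFT table.
    return {"frac": frac, "model": model, "version": version, "sb_method": sb_method}

model = "SHG1G2"      # hypothetical value of args.model
version = "2024.10"   # hypothetical version string

pdf = build_the_ssoft(
    frac=None,
    model=model,
    version=version,
    sb_method="fastnifty",  # new keyword introduced in this PR
)

out_name = "ssoft_{}_{}.parquet".format(model, version)
```

The output filename follows the same `ssoft_<model>_<version>` pattern used for the Spark session name.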
2 changes: 1 addition & 1 deletion bin/raw2science_batch.py
@@ -67,7 +67,7 @@ def main():
df = df.filter(df["candidate.nbad"] == 0).filter(df["candidate.rb"] >= 0.55)

# Apply science modules
-    df = apply_science_modules(df, logger)
+    df = apply_science_modules(df)

# Add tracklet information
df_trck = spark.read.format("parquet").load(input_raw)
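
The quality cuts applied just before the science modules (`nbad == 0` and `rb >= 0.55`) can be sketched without Spark. A plain-Python equivalent, assuming alerts are represented as simple dicts with the ZTF-style `nbad` and `rb` fields:

```python
# Plain-Python equivalent of the Spark quality cuts shown above:
# keep only alerts with no bad pixels (nbad == 0) and a real-bogus
# score of at least 0.55.

candidates = [
    {"nbad": 0, "rb": 0.90},  # passes both cuts
    {"nbad": 2, "rb": 0.90},  # rejected: bad pixels present
    {"nbad": 0, "rb": 0.40},  # rejected: real-bogus score too low
]

kept = [c for c in candidates if c["nbad"] == 0 and c["rb"] >= 0.55]
```

In the actual pipeline the same logic runs as chained `DataFrame.filter` calls on the nested `candidate` struct.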