8.11.1 #526

Merged
merged 73 commits
Oct 19, 2023
df50b8b
update pip
bimac Sep 29, 2023
d79667f
add method for checking availability of internet connection
bimac Oct 1, 2023
27f7f64
move update check to separate thread
bimac Oct 1, 2023
673bb38
version number / changelog
bimac Oct 1, 2023
0a2a879
transfer_data CLI with argparse
bimac Oct 2, 2023
e49d760
Update CHANGELOG.md
bimac Oct 2, 2023
903a84e
prepare thread-worker for fetching subject details
bimac Oct 3, 2023
2770033
fix transfer test using local lab: use mock function instead of settings
oliche Oct 3, 2023
0656c6f
get rid of rsync-path
oliche Oct 3, 2023
a4068ea
add md5 check to copy_folders
bimac Oct 3, 2023
ddf6f16
doc-strings & type-hints
bimac Oct 3, 2023
c743165
Merge branch 'iblrigv8' into iblrigv8dev
bimac Oct 3, 2023
0e4de21
reorganize handling of CLI transfer scripts
bimac Oct 4, 2023
cee405f
documentation
bimac Oct 4, 2023
872decf
Update commands.py
bimac Oct 4, 2023
32301d8
Update commands.py
bimac Oct 4, 2023
b611b15
Update commands.py
bimac Oct 4, 2023
b2d6d31
Update commands.py
bimac Oct 4, 2023
8416791
Update commands.py
bimac Oct 4, 2023
d16ccab
Update commands.py
bimac Oct 4, 2023
73e75bf
more verbose copy script
bimac Oct 4, 2023
07c03c5
use ibllib @master
bimac Oct 4, 2023
f6fc00b
Update transfer_experiments.py
bimac Oct 4, 2023
adea831
MD5 -> BLAKE2B
bimac Oct 4, 2023
886d741
Update commands.py
bimac Oct 4, 2023
a050a8f
Update transfer_experiments.py
bimac Oct 4, 2023
eccac51
Update wizard.py
bimac Oct 4, 2023
e6f4251
Update wizard.py
bimac Oct 4, 2023
5bbb2f4
Update wizard.py
bimac Oct 4, 2023
57e5879
add dud detection & update changelog
bimac Oct 4, 2023
f35d33f
prevent windows from going to sleep
bimac Oct 5, 2023
a3bc504
add doc-strings and type-hints
bimac Oct 5, 2023
81e9081
add docstrings for AdvancedCW
oliche Oct 5, 2023
58f30fd
refactoring
bimac Oct 5, 2023
6823de7
offer deletion of dud sessions (working)
bimac Oct 6, 2023
63d7b2d
add user input to transfer CLI
bimac Oct 6, 2023
a0b4130
refactoring copiers
bimac Oct 6, 2023
48bf0e0
use blake2b hashing from iblutil
bimac Oct 10, 2023
dc3c8d2
track discussion on parameters / settings for this task
oliche Oct 10, 2023
6db7494
pip reset --hard? oh my ...
bimac Oct 10, 2023
c887213
Update pyproject.toml
bimac Oct 10, 2023
145bf85
Update CHANGELOG.md
bimac Oct 10, 2023
ff909e3
hierarchical parameters - adaptive gain
oliche Oct 10, 2023
057dad9
add some parameters
oliche Oct 10, 2023
42ac3e3
Update pyproject.toml
bimac Oct 11, 2023
fcbe41d
add probability_left
bimac Oct 11, 2023
3a54967
more parsers for advancedChoiceWorld
bimac Oct 11, 2023
770e4f6
repair resizing on linux
bimac Oct 11, 2023
c0aa20a
cleanup of QT UIs
bimac Oct 11, 2023
3f7caf8
Update CHANGELOG.md
bimac Oct 11, 2023
9841535
cleanup of GUI, doc-strings
bimac Oct 12, 2023
12f3a16
Update wizard.py
bimac Oct 12, 2023
e641483
prepare AnyDeskWorker
bimac Oct 12, 2023
225b58e
reorganize multi-threading / refactoring
bimac Oct 13, 2023
3339f14
add control of status LED
bimac Oct 13, 2023
7c285f0
disable LEDs on start of iblrig
bimac Oct 13, 2023
52dc577
add control for LED
bimac Oct 13, 2023
ab8705f
updated installation instructions
bimac Oct 14, 2023
876a4e4
minor corrections
bimac Oct 14, 2023
3e8bebd
bug report form
bimac Oct 14, 2023
9ae7f9b
skip initialization of existing singleton
bimac Oct 16, 2023
62ac66d
remember state of singleton & GUI position
bimac Oct 16, 2023
8bdc016
Merge branch 'status_led' into iblrigv8dev
bimac Oct 16, 2023
85f5ad1
update icons
bimac Oct 16, 2023
3d5c352
Merge branch 'iblrigv8dev' into advanced_cw
bimac Oct 16, 2023
337d3d4
correct entry-points and add respective test-case
bimac Oct 17, 2023
3ba5886
post-merge clean-up
bimac Oct 16, 2023
4015bf3
Update hardware.py
bimac Oct 17, 2023
f2d00bd
add pydantic
bimac Oct 17, 2023
f711304
clean-up
bimac Oct 19, 2023
d50b6ad
Merge branch 'iblrigv8dev' into advanced_cw
bimac Oct 19, 2023
69c137a
8.11.1
bimac Oct 19, 2023
b348d9f
Merge branch 'iblrigv8' into iblrigv8dev
bimac Oct 19, 2023
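Two of the commits above ("MD5 -> BLAKE2B" and "use blake2b hashing from iblutil") switch the copy-verification checksum from MD5 to BLAKE2b. The actual helper lives in iblutil and may differ, but Python's standard `hashlib` supports BLAKE2b directly; a minimal sketch of hashing a transferred file in chunks:

```python
import hashlib


def blake2b_digest(path, chunk_size=2 ** 20):
    """Return the BLAKE2b hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.blake2b()
    with open(path, 'rb') as f:
        # iter() with a sentinel keeps memory flat for arbitrarily large files
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()
```

Comparing the digest computed on the rig against the one computed on the server after `copy_folders` is the integrity check these commits describe.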
1 change: 0 additions & 1 deletion .pre-commit-config.yaml
Original file line number Diff line number Diff line change
Expand Up @@ -20,7 +20,6 @@ repos:
- repo: https://github.com/pre-commit/pygrep-hooks
rev: v1.10.0
hooks:
- id: python-check-blanket-type-ignore
- id: python-use-type-annotations
- id: python-no-log-warn
- id: text-unicode-replacement-char
Expand Down
10 changes: 7 additions & 3 deletions CHANGELOG.md
Expand Up @@ -3,15 +3,19 @@ Changelog

-------------------------------

8.11.1
------
* add GUI options for AdvancedChoiceWorld

8.11.0
------
* add check for availability of internet
* add proper CLI for data transfer scripts
* add option for disabling Bpod status LED
* add control for disabling Bpod status LED
* skip initialization of existing Bpod singleton
* remember settings for status LED and GUI position
* move update-check to separate thread
* detect dud (less than 42 trials) and offer deletion
* move update-check to separate thread
* detect duds (less than 42 trials) and offer deletion
* various small bugfixes

8.10.2
Expand Down
2 changes: 1 addition & 1 deletion iblrig/__init__.py
Expand Up @@ -4,7 +4,7 @@
# 3) Check CI and eventually wet lab test
# 4) Pull request to iblrigv8
# 5) git tag the release in accordance to the version number below (after merge!)
__version__ = '8.10.3'
__version__ = '8.11.1'

# The following method call will try to get post-release information (i.e. the number of commits since the last tagged
# release corresponding to the one above), plus information about the state of the local repository (dirty/broken)
Expand Down
5 changes: 5 additions & 0 deletions iblrig/base_biased_choice_world_params.yaml
@@ -0,0 +1,5 @@
'BLOCK_INIT_5050': true
'BLOCK_LEN_FACTOR': 60
'BLOCK_LEN_MAX': 100
'BLOCK_LEN_MIN': 20
'BLOCK_PROBABILITY_SET': [0.2, 0.8]
77 changes: 67 additions & 10 deletions iblrig/base_choice_world.py
Expand Up @@ -10,9 +10,11 @@
from string import ascii_letters
import subprocess
import time
from typing import Literal, Annotated

import numpy as np
import pandas as pd
from pydantic import BaseModel, Field

from pybpodapi.protocol import StateMachine
from pybpodapi.com.messaging.trial import Trial
Expand All @@ -32,6 +34,49 @@
NTRIALS_INIT = 2000
NBLOCKS_INIT = 100

Probability = Annotated[float, Field(ge=0.0, le=1.0)]


class ChoiceWorldParams(BaseModel):
AUTOMATIC_CALIBRATION: bool = True
ADAPTIVE_REWARD: bool = False
BONSAI_EDITOR: bool = False
CALIBRATION_VALUE: float = 0.067
CONTRAST_SET: list[Probability] = Field([1.0, 0.25, 0.125, 0.0625, 0.0], min_items=1)
CONTRAST_SET_PROBABILITY_TYPE: Literal["uniform", "skew_zero"] = 'uniform'
GO_TONE_AMPLITUDE: float = 0.0272
GO_TONE_DURATION: float = 0.11
GO_TONE_IDX: int = Field(2, ge=0)
GO_TONE_FREQUENCY: float = Field(5000, gt=0)
FEEDBACK_CORRECT_DELAY_SECS: float = 1
FEEDBACK_ERROR_DELAY_SECS: float = 2
FEEDBACK_NOGO_DELAY_SECS: float = 2
INTERACTIVE_DELAY: float = 0.0
ITI_DELAY_SECS: float = 1
NTRIALS: int = Field(2000, gt=0)
POOP_COUNT: bool = True
PROBABILITY_LEFT: Probability = 0.5
QUIESCENCE_THRESHOLDS: list[float] = Field(default=[-2, 2], min_length=2, max_length=2)
QUIESCENT_PERIOD: float = 0.2
RECORD_AMBIENT_SENSOR_DATA: bool = True
RECORD_SOUND: bool = True
RESPONSE_WINDOW: float = 60
REWARD_AMOUNT_UL: float = 1.5
REWARD_TYPE: str = 'Water 10% Sucrose'
STIM_ANGLE: float = 0.0
STIM_FREQ: float = 0.1
STIM_GAIN: float = 4.0 # wheel to stimulus relationship (degrees visual angle per mm of wheel displacement)
STIM_POSITIONS: list[float] = [-35, 35]
STIM_SIGMA: float = 7.0
STIM_TRANSLATION_Z: Literal[7, 8] = 7 # 7 for ephys, 8 otherwise. -p:Stim.TranslationZ-{STIM_TRANSLATION_Z} bonsai parameter
SYNC_SQUARE_X: float = 1.33
SYNC_SQUARE_Y: float = -1.03
USE_AUTOMATIC_STOPPING_CRITERIONS: bool = True
VISUAL_STIMULUS: str = 'GaborIBLTask / Gabor2D.bonsai' # null / passiveChoiceWorld_passive.bonsai
WHITE_NOISE_AMPLITUDE: float = 0.05
WHITE_NOISE_DURATION: float = 0.5
WHITE_NOISE_IDX: int = 3
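The `Probability = Annotated[float, Field(ge=0.0, le=1.0)]` alias above delegates bounds checking to pydantic at model-validation time. A dependency-free sketch of the same check (a plain function, not pydantic's actual mechanism) makes the contract explicit:

```python
def validate_probability(value: float, name: str = "probability") -> float:
    """Reject values outside [0, 1], as Field(ge=0.0, le=1.0) does on the model."""
    if not 0.0 <= value <= 1.0:
        raise ValueError(f"{name} must be between 0.0 and 1.0, got {value}")
    return float(value)
```

So `PROBABILITY_LEFT = 0.5` passes, while a typo like `1.2` fails at construction rather than mid-session.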


class ChoiceWorldSession(
iblrig.base_tasks.BaseSession,
Expand All @@ -43,6 +88,7 @@ class ChoiceWorldSession(
iblrig.base_tasks.SoundMixin,
iblrig.base_tasks.ValveMixin,
):
# task_params = ChoiceWorldParams()
base_parameters_file = Path(__file__).parent.joinpath('base_choice_world_params.yaml')

def __init__(self, *args, delay_secs=0, **kwargs):
Expand Down Expand Up @@ -639,6 +685,7 @@ class BiasedChoiceWorldSession(ActiveChoiceWorldSession):
Biased choice world session is the instantiation of ActiveChoiceWorld where the notion of biased
blocks is introduced.
"""
base_parameters_file = Path(__file__).parent.joinpath('base_biased_choice_world_params.yaml')
protocol_name = "_iblrig_tasks_biasedChoiceWorld"

def __init__(self, **kwargs):
Expand Down Expand Up @@ -713,9 +760,9 @@ class TrainingChoiceWorldSession(ActiveChoiceWorldSession):
"""
protocol_name = "_iblrig_tasks_trainingChoiceWorld"

def __init__(self, training_phase=-1, adaptive_reward=-1.0, **kwargs):
def __init__(self, training_phase=-1, adaptive_reward=-1.0, adaptive_gain=None, **kwargs):
super(TrainingChoiceWorldSession, self).__init__(**kwargs)
inferred_training_phase, inferred_adaptive_reward = self.get_subject_training_info()
inferred_training_phase, inferred_adaptive_reward, inferred_adaptive_gain = self.get_subject_training_info()
if training_phase == -1:
self.logger.critical(f"Got training phase: {inferred_training_phase}")
self.training_phase = inferred_training_phase
Expand All @@ -728,6 +775,12 @@ def __init__(self, training_phase=-1, adaptive_reward=-1.0, **kwargs):
else:
self.logger.critical(f"Adaptive reward manually set to {adaptive_reward} uL")
self.session_info["ADAPTIVE_REWARD_AMOUNT_UL"] = adaptive_reward
if adaptive_gain is None:
self.logger.critical(f"Got Adaptive gain {inferred_adaptive_gain} degrees/mm")
self.session_info["ADAPTIVE_GAIN_VALUE"] = inferred_adaptive_gain
else:
self.logger.critical(f"Adaptive gain manually set to {adaptive_gain} degrees/mm")
self.session_info["ADAPTIVE_GAIN_VALUE"] = adaptive_gain
self.var = {
"training_phase_trial_counts": np.zeros(6),
"last_10_responses_sides": np.zeros(10),
Expand All @@ -742,25 +795,29 @@ def reward_amount(self):
def get_subject_training_info(self):
"""
Get the previous session's according to this session parameters and deduce the
training level and adaptive reward amount.
training level, adaptive reward amount and adaptive gain value
:return:
"""
try:
training_phase, adaptive_reward, _ = choiceworld.get_subject_training_info(
tinfo, _ = choiceworld.get_subject_training_info(
subject_name=self.session_info.SUBJECT_NAME,
default_reward=self.task_params.REWARD_AMOUNT_UL,
stim_gain=self.task_params.STIM_GAIN,
local_path=self.iblrig_settings['iblrig_local_data_path'],
remote_path=self.iblrig_settings['iblrig_remote_data_path'],
lab=self.iblrig_settings['ALYX_LAB'],
task_name=self.protocol_name,
)
except Exception:
self.logger.critical('Failed to get training information from previous subjects: %s', traceback.format_exc())
training_phase, adaptive_reward = (
iblrig.choiceworld.DEFAULT_TRAINING_PHASE, iblrig.choiceworld.DEFAULT_REWARD_VOLUME)
self.logger.critical(f'The mouse will train on level {training_phase} and with reward {adaptive_reward} uL')

return training_phase, adaptive_reward
tinfo = dict(
training_phase=iblrig.choiceworld.DEFAULT_TRAINING_PHASE,
adaptive_reward=iblrig.choiceworld.DEFAULT_REWARD_VOLUME,
adaptive_gain=self.task_params.AG_INIT_VALUE
)
self.logger.critical(f"The mouse will train on level {tinfo['training_phase']}, "
f"with reward {tinfo['adaptive_reward']} uL and gain {tinfo['adaptive_gain']}")
return tinfo['training_phase'], tinfo['adaptive_reward'], tinfo['adaptive_gain']

def compute_performance(self):
"""
Expand Down Expand Up @@ -823,7 +880,7 @@ def next_trial(self):
# contrast is the last contrast
contrast = last_contrast
# save and send trial info to bonsai
self.draw_next_trial_info(pleft=0.5, position=position, contrast=contrast)
self.draw_next_trial_info(pleft=self.task_params.PROBABILITY_LEFT, position=position, contrast=contrast)
self.trials_table.at[self.trial_num, 'training_phase'] = self.training_phase

def show_trial_log(self):
Expand Down
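The diff above threads a third value, adaptive gain, through `get_subject_training_info`: prefer the previous session's `ADAPTIVE_GAIN_VALUE`, fall back to `AG_INIT_VALUE`, and snap to the target `STIM_GAIN` once the subject has enough responded trials (the 200-trial threshold is taken from the diff). A standalone sketch of that lookup, with the settings dict standing in for `session_info.task_settings`:

```python
def infer_adaptive_gain(task_settings, n_responded_trials):
    """Pick the stimulus gain for the next session from the previous one's settings."""
    # previous adaptive value wins; otherwise fall back to the initial value
    gain = task_settings.get('ADAPTIVE_GAIN_VALUE',
                             task_settings.get('AG_INIT_VALUE'))
    # subject is past the adaptive stage: use the target gain
    if n_responded_trials > 200:
        gain = task_settings.get('STIM_GAIN')
    return gain
```

If neither key is present, `gain` is `None`, which the caller can treat as "use the task default".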
9 changes: 2 additions & 7 deletions iblrig/base_choice_world_params.yaml
@@ -1,10 +1,5 @@
'AUTOMATIC_CALIBRATION': true
'ADAPTIVE_REWARD': false
'BLOCK_INIT_5050': true
'BLOCK_LEN_FACTOR': 60
'BLOCK_LEN_MAX': 100
'BLOCK_LEN_MIN': 20
'BLOCK_PROBABILITY_SET': [0.2, 0.8]
'BONSAI_EDITOR': false
'CALIBRATION_VALUE': 0.067
'CONTRAST_SET': [1.0, 0.25, 0.125, 0.0625, 0.0]
Expand All @@ -20,17 +15,17 @@
'ITI_DELAY_SECS': 1
'NTRIALS': 2000
'POOP_COUNT': true
'PROBABILITY_LEFT': 0.5
'QUIESCENCE_THRESHOLDS': [-2, 2]
'QUIESCENT_PERIOD': 0.2
'RECORD_AMBIENT_SENSOR_DATA': true
'RECORD_SOUND': true
'REPEAT_ON_ERROR': false
'RESPONSE_WINDOW': 60
'REWARD_AMOUNT_UL': 1.5
'REWARD_TYPE': Water 10% Sucrose
'STIM_ANGLE': 0.0
'STIM_FREQ': 0.1
'STIM_GAIN': 4.0
'STIM_GAIN': 4.0 # wheel to stimulus relationship (degrees visual angle per mm of wheel displacement)
'STIM_POSITIONS': [-35, 35]
'STIM_SIGMA': 7.0
'STIM_TRANSLATION_Z': 7 # 7 for ephys, 8 otherwise. -p:Stim.TranslationZ-{STIM_TRANSLATION_Z} bonsai parameter
Expand Down
40 changes: 23 additions & 17 deletions iblrig/base_tasks.py
Expand Up @@ -11,6 +11,8 @@
import inspect
import json
import os
from typing import Optional

import serial
import subprocess
import time
Expand Down Expand Up @@ -47,8 +49,8 @@

class BaseSession(ABC):
version = None
protocol_name = None
base_parameters_file = None
protocol_name: Optional[str] = None
base_parameters_file: Optional[Path] = None
is_mock = False
extractor_tasks = None
checked_for_update = False
Expand Down Expand Up @@ -89,23 +91,27 @@ def __init__(self, subject=None, task_parameter_file=None, file_hardware_setting
self.iblrig_settings = iblrig.path_helper.load_settings_yaml(file_iblrig_settings or 'iblrig_settings.yaml')
if iblrig_settings is not None:
self.iblrig_settings.update(iblrig_settings)
self.wizard = wizard
# Load the tasks settings, from the task folder or override with the input argument
task_parameter_file = task_parameter_file or Path(inspect.getfile(self.__class__)).parent.joinpath('task_parameters.yaml')
base_parameters_files = [
task_parameter_file or Path(inspect.getfile(self.__class__)).parent.joinpath('task_parameters.yaml')]
# loop through the task hierarchy to gather parameter files
for cls in self.__class__.__mro__:
base_file = getattr(cls, 'base_parameters_file', None)
if base_file is not None:
base_parameters_files.append(base_file)
# this is a trick to remove list duplicates while preserving order, we want the highest order first
base_parameters_files = list(reversed(list(dict.fromkeys(base_parameters_files))))
# now we loop into the files and update the dictionary, the latest files in the hierarchy have precedence
self.task_params = Bunch({})
self.wizard = wizard

# first loads the base parameters for a given task
if self.base_parameters_file is not None and self.base_parameters_file.exists():
with open(self.base_parameters_file) as fp:
self.task_params = Bunch(yaml.safe_load(fp))

# then updates the dictionary with the child task parameters
if task_parameter_file.exists():
with open(task_parameter_file) as fp:
task_params = yaml.safe_load(fp)
if task_params is not None:
self.task_params.update(Bunch(task_params))

for param_file in base_parameters_files:
if Path(param_file).exists():
with open(param_file) as fp:
params = yaml.safe_load(fp)
if params is not None:
self.task_params.update(Bunch(params))
# at last sort the dictionary so it's easier for a human to navigate the many keys
self.task_params = Bunch(dict(sorted(self.task_params.items())))
self.session_info = Bunch({
'NTRIALS': 0,
'NTRIALS_CORRECT': 0,
Expand Down
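The parameter-file layering introduced above can be isolated: walk the class MRO to collect each ancestor's `base_parameters_file`, de-duplicate while preserving order with `dict.fromkeys`, then reverse so ancestors come first and more specific files override earlier keys. A minimal sketch with hypothetical class names (`Base`/`Child` are illustrations, not iblrig classes):

```python
class Base:
    base_parameters_file = 'base.yaml'


class Child(Base):
    base_parameters_file = 'child.yaml'


def gather_parameter_files(cls, task_parameter_file):
    """Order parameter files so the most specific one is applied last."""
    files = [task_parameter_file]
    for klass in cls.__mro__:  # e.g. (Child, Base, object)
        base_file = getattr(klass, 'base_parameters_file', None)
        if base_file is not None:
            files.append(base_file)
    # dict.fromkeys drops duplicates while keeping first-seen order;
    # reversing puts the highest ancestor first, the task file last
    return list(reversed(list(dict.fromkeys(files))))

# gather_parameter_files(Child, 'task.yaml')
# -> ['base.yaml', 'child.yaml', 'task.yaml']
```

Iterating the result and calling `dict.update` per file then gives child and task settings precedence over base defaults, exactly the merge order the loop in `BaseSession.__init__` relies on.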
17 changes: 12 additions & 5 deletions iblrig/choiceworld.py
Expand Up @@ -37,8 +37,8 @@ def compute_adaptive_reward_volume(subject_weight_g, reward_volume_ul, delivered


def get_subject_training_info(
subject_name, task_name='_iblrig_tasks_trainingChoiceWorld',
default_reward=DEFAULT_REWARD_VOLUME, mode='silent', **kwargs) -> tuple[int, float, dict]:
subject_name, task_name='_iblrig_tasks_trainingChoiceWorld', stim_gain=None,
default_reward=DEFAULT_REWARD_VOLUME, mode='silent', **kwargs) -> tuple[dict, dict]:
"""
Goes through the history of a subject and gets the latest
training phase and the adaptive reward volume for this subject
Expand All @@ -50,31 +50,38 @@ def get_subject_training_info(
:param mode: 'defaults' or 'raise': if 'defaults' returns default values if no history is found, if 'raise' raises ValueError
:param **kwargs: optional arguments to be passed to iblrig.path_helper.get_local_and_remote_paths
if not used, will use the arguments from iblrig/settings/iblrig_settings.yaml
:return: training_phase (int), default_reward uL (float between 1.5 and 3) and a
:return: training_info dictionary with keys:
default_reward uL (float between 1.5 and 3) and a
session_info dictionary with keys: session_path, experiment_description, task_settings, file_task_data
"""
session_info = iterate_previous_sessions(subject_name, task_name=task_name, n=1, **kwargs)
if len(session_info) == 0:
if mode == 'silent':
logger.warning("The training status could not be determined returning default values")
return DEFAULT_TRAINING_PHASE, default_reward, None
return dict(training_phase=DEFAULT_TRAINING_PHASE, adaptive_reward=default_reward, adaptive_gain=stim_gain), None
elif mode == 'raise':
raise ValueError("The training status could not be determined as no previous sessions were found")
else:
session_info = session_info[0]
trials_data, _ = iblrig.raw_data_loaders.load_task_jsonable(session_info.file_task_data)
# gets the reward volume from the previous session
previous_reward_volume = (session_info.task_settings.get('ADAPTIVE_REWARD_AMOUNT_UL') or
session_info.task_settings.get('REWARD_AMOUNT_UL'))
adaptive_reward = compute_adaptive_reward_volume(
subject_weight_g=session_info.task_settings['SUBJECT_WEIGHT'],
reward_volume_ul=previous_reward_volume,
delivered_volume_ul=trials_data['reward_amount'].sum(),
ntrials=trials_data.shape[0])
# gets the training_phase by looking at the trials table
if 'training_phase' in trials_data:
training_phase = trials_data['training_phase'].values[-1]
else:
training_phase = DEFAULT_TRAINING_PHASE
return training_phase, adaptive_reward, session_info
# gets the adaptive gain
adaptive_gain = session_info.task_settings.get('ADAPTIVE_GAIN_VALUE', session_info.task_settings.get('AG_INIT_VALUE'))
if np.sum(trials_data['response_side'] != 0) > 200:
adaptive_gain = session_info.task_settings.get('STIM_GAIN')
return dict(training_phase=training_phase, adaptive_reward=adaptive_reward, adaptive_gain=adaptive_gain), session_info


def training_contrasts_probabilities(phase=1):
Expand Down
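The `mode='silent'` / `mode='raise'` switch in `get_subject_training_info` above defines a small contract for the no-history case: silent mode returns labeled defaults so a fresh subject can still start, while raise mode surfaces the problem to the caller. A sketch of just that branch (the numeric defaults here are illustrative, not necessarily iblrig's):

```python
DEFAULT_TRAINING_PHASE = 0   # illustrative default
DEFAULT_REWARD_VOLUME = 1.5  # illustrative default, in uL


def fallback_training_info(mode='silent', stim_gain=4.0):
    """Behaviour of get_subject_training_info when no previous sessions exist."""
    if mode == 'silent':
        # same (training_info, session_info) shape as the real return value
        return dict(training_phase=DEFAULT_TRAINING_PHASE,
                    adaptive_reward=DEFAULT_REWARD_VOLUME,
                    adaptive_gain=stim_gain), None
    raise ValueError('The training status could not be determined '
                     'as no previous sessions were found')
```

Returning a dict instead of a bare tuple (the change this diff makes) lets callers such as `TrainingChoiceWorldSession` pick fields by name, so adding `adaptive_gain` did not silently break positional unpacking elsewhere.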
11 changes: 5 additions & 6 deletions iblrig/gui/ui_update.py
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-

# Form implementation generated from reading ui file 'iblrig/gui/ui_update.ui'
# Form implementation generated from reading ui file 'ui_update.ui'
#
# Created by: PyQt5 UI code generator 5.15.9
#
Expand All @@ -14,14 +14,14 @@
class Ui_update(object):
def setupUi(self, update):
update.setObjectName("update")
update.resize(353, 496)
update.resize(451, 496)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(update.sizePolicy().hasHeightForWidth())
update.setSizePolicy(sizePolicy)
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap(":/images/wizard.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
icon.addPixmap(QtGui.QPixmap("wizard.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
update.setWindowIcon(icon)
update.setModal(True)
self.horizontalLayout_2 = QtWidgets.QHBoxLayout(update)
Expand All @@ -32,7 +32,7 @@ def setupUi(self, update):
self.uiLabelLogo = QtWidgets.QLabel(update)
self.uiLabelLogo.setMaximumSize(QtCore.QSize(64, 64))
self.uiLabelLogo.setText("")
self.uiLabelLogo.setPixmap(QtGui.QPixmap("iblrig/gui\\wizard.png"))
self.uiLabelLogo.setPixmap(QtGui.QPixmap("wizard.png"))
self.uiLabelLogo.setScaledContents(True)
self.uiLabelLogo.setObjectName("uiLabelLogo")
self.uiLayoutLogo.addWidget(self.uiLabelLogo)
Expand All @@ -45,10 +45,8 @@ def setupUi(self, update):
self.uiLabelHeader.setObjectName("uiLabelHeader")
self.uiLayoutRight.addWidget(self.uiLabelHeader)
self.uiTextBrowserChanges = QtWidgets.QTextBrowser(update)
self.uiTextBrowserChanges.setStyleSheet("")
self.uiTextBrowserChanges.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
self.uiTextBrowserChanges.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
self.uiTextBrowserChanges.setDocumentTitle("")
self.uiTextBrowserChanges.setMarkdown("")
self.uiTextBrowserChanges.setTextInteractionFlags(QtCore.Qt.NoTextInteraction)
self.uiTextBrowserChanges.setObjectName("uiTextBrowserChanges")
Expand Down Expand Up @@ -96,6 +94,7 @@ def setupUi(self, update):
self.horizontalLayout_2.setStretch(1, 100)

self.retranslateUi(update)
self.uiPushButtonOK.released.connect(update.close) # type: ignore
QtCore.QMetaObject.connectSlotsByName(update)

def retranslateUi(self, update):
Expand Down