1. effective_radius needs to return something like:
size * (0.5 * diameter)
where 'size' is the representative particle size.
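   A minimal sketch of that return value (the struct and member names here
   are only assumptions about how the properties are stored):

     // sketch -- 'size' is the representative particle size and
     // 'diameter' is the collision diameter from the particle properties
     inline double effective_radius( const Properties & p ) {
       return p.size * ( 0.5 * p.diameter );
     }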
2. Need to create code to find a best (initial) guess of (sigma*v)_max. This
guess should be based upon some multiple of the sqrt of temperature.
   Assuming that a particular cell of particles is in thermal equilibrium,
   we should predict the maximum relative velocity in the ensemble and then
   perform a quick
search through the cross-section data to set the (sigma*v)_max value.
   This value should be allowed to be upgraded if higher values are
   encountered during runtime. It should also be allowed to be downgraded if we can
detect that the v_max (used to search for (sigma*v)_max) is significantly
higher than the similar sqrt-of-temperature factor for the current
time-step.
It may be necessary to force all functional cross-section sources
(currently only VHS) to also provide a derivative of the cross-section.
This may help in searching for maximum values of the cross-section in
particular regions.
It might even be better if the functional forms do the search themselves
since they may actually be smart enough to know that they are
monotonically increasing/decreasing or whatever.
I think the best approach would be to require each CrossSection type to
define a virtual function 'find_max_sigma_v_rel(v_rel_max)'. This way,
functional forms can perform their smartest method for satisfying the
function and discrete forms can easily just search through their data
within that velocity range to find the maximum value.
DONE
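   A rough sketch of the interface described above (only the
   find_max_sigma_v_rel name comes from this item; everything else is an
   assumption):

     class CrossSection {
     public:
       virtual ~CrossSection() {}

       /** Return max( sigma(v) * v ) over 0 <= v <= v_rel_max.
        * Functional forms (e.g. VHS) can use whatever analytic knowledge
        * they have (monotonicity, derivatives); discrete forms simply
        * scan their tabulated data within the velocity range. */
       virtual double find_max_sigma_v_rel( const double & v_rel_max ) const = 0;
     };

   The initial guess for v_rel_max itself would come from the cell
   temperature, something like v_rel_max ~ C * sqrt( kB * T / mu ), with C
   a small multiple and mu the reduced mass of the colliding pair.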
3. Create an 'averaged cross-section' source when no cross-section data is
found for a particular interaction. This source would search for and use
the single-species cross-sections for each of the two dissimilar inputs
to create an averaged cross section. This average would not represent the
average cross-section, but rather the average diameter.
If the two single-species cross-section data are in the form of VHSData,
then the resulting average should be a VHS source where each of the
parameters is averaged correctly. If the two sources are NOT in the form
of VHSData, it may be necessary to pick a particular grid that spans the
energy range of both data-sets and a discretization that is no smaller
than the discretization of the data-sets. The resulting grid (in velocity
space) would not need to be uniform (as it currently is not required to
be), so the grid step size could be a local choice.
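   A hedged sketch of the VHS branch of this item (struct and field names
   are assumptions; the correct averaging rule for each parameter is still
   to be decided):

     // sketch: the averaged quantity is the diameter, not the cross
     // section itself.
     VHSParameters average_vhs( const VHSParameters & a,
                                const VHSParameters & b ) {
       VHSParameters avg;
       avg.diameter = 0.5 * ( a.diameter + b.diameter );
       avg.T_ref    = 0.5 * ( a.T_ref    + b.T_ref );     // reference temperature
       avg.visc_law = 0.5 * ( a.visc_law + b.visc_law );  // viscosity-law exponent
       return avg;
     }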
4. Test interaction::Set::calculateOutPath(...)
DONE
5. Test interaction::CrossSection::find_max_sigma_v_rel(...)
DONE
6. Create interaction::PreComputedSet
7. Use XML XInclude to include different particledb.xml type files into one
runtime accessible set. This allows a user to easily maintain their own
set of data, even across upgrades of the software. This also allows us
to separate different sets of data depending on their proprietary and or
sensitive nature.
DONE
NOTE: the convention will be for each included file to uniquely define
their own sub-tree, such as
"/ParticleDB/Olson" (for the 'Olson' set of data)
"/ParticleDB/standard" (for the 'standard' set of data--distributed by the
package; i.e. the default set of data)
8. Investigate the feasibility of allowing particle "<alias>" entries to be
given to particles so that users can refer to the same particle property
set by different names.
9. If an equation specified in the "allowed_equations" set cannot be
used, then issue a warning.
NOT SURE THIS APPLIES WITH THE NEW FILTER MECHANISM.
10. Create a generalized filter approach to establishing the set of all
usable equations for a particular input pair.
This filter should have things like:
allow_equation("<eq>")
reject_equation("<eq>")
allow_interaction_type(["elastic", "ionization", "excitation", "attachment", ...])
reject_interaction_type(["elastic", "ionization", "excitation", "attachment", ...])
This filter should provide an order of precedence for equations that may
be multiply defined. For instance, if the user has defined an equation in
the user's own xml file (which is included by the particledb.xml XInclude
commands), then the user should be able to give their version of the
equation precedence over any other definition that may match the same
equation "fingerprint".
It seems like the best way to do this is to have the above allow_,reject_
functions take an optional argument that would specify the portion of the
xml tree which should take precedence in the filter. For instance, the
user should be able to specify:
allow_equation("<eq>", "Olson")
to give precedence to the equation "<eq>" as defined in the Olson
sub-tree, e.g. in
"/ParticleDB/Olson/Interactions/Interaction[string(Eq)='<eq>']"
DONE
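   A sketch of what such a filter interface might look like (every name
   here is illustrative, not the final API):

     #include <string>
     #include <vector>

     class EquationFilter {
     public:
       /* Allow/reject a specific equation.  The optional 'source'
        * argument names the xml sub-tree (e.g. "Olson") whose definition
        * takes precedence when the equation is multiply defined. */
       void allow_equation ( const std::string & eq,
                             const std::string & source = "" );
       void reject_equation( const std::string & eq );

       void allow_interaction_type ( const std::vector<std::string> & types );
       void reject_interaction_type( const std::vector<std::string> & types );
     };

     // e.g.:  filter.allow_equation( "<eq>", "Olson" );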
11. Add a new section in the database for a predefined set of data that might
be useful to import together.
   For example, a user might be able to just say:
db.add_set("air")
and all particles and equations that are confirmed to be pertinent for
simulating "air" would be added to the runtime database.
   <Set name="air">
     <particles>
       <P>e^-</P>
       <P>O2</P>
       <P>O2(rot)</P>
       <P>N2</P>
     </particles>
     <equation-filter>
       <Or>
         <Elastic/>
         <And>
           <EqIO dir="In"><P>e^-</P><P>O2</P></EqIO>
           <EqIO dir="Out"><P>e^-</P><P>O2(rot)</P></EqIO>
         </And>
       </Or>
     </equation-filter>
   </Set>
12. Add secondary and tertiary sorting rules:
a. sort by charge (positive, then neutral, negative)(???)
b. sort alphabetically
This needs to be done so that there is not an undefined sort order for
things that have the same mass.
DONE (alphabetically at least, charge not yet)
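   A sketch of the full comparison chain (the charge ordering is kept as
   written above, still marked '???'):

     // sketch: mass first, then charge (positive, neutral, negative),
     // then name, so no two distinct properties ever compare equal.
     bool properties_less( const Properties & a, const Properties & b ) {
       if ( a.mass   != b.mass   ) return a.mass   < b.mass;
       if ( a.charge != b.charge ) return a.charge > b.charge; // +, 0, -
       return a.name < b.name;                                 // alphabetical
     }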
13. Add three-body interaction table and facilities to manage such.
14. (Todo items that used to be in ParticleDB.cpp)
a.
Create a consistent and simple interface to different types of
interactions which each have certain cross sections. These interactions
can include multiple types of state-changing (inelastic) collisions,
ionizing collisions, and so forth. These interactions would include:
1. cross sections, probabilities, etc.
2. Path information with regards to inelastic collisions. The path
   could include internal quantum state changes, association, three-body
recombination (2 particles combining with a background electron gas),
dissociation, and so on. The path implementation might result in
particles changing type as they collide--with a separate type given
for each particle with a different charge, isotope, internal quantum
number, and so on. It would be best for the database to organize
these related types such that the paths are very easy to traverse.
   Easy to traverse would mean something like pointer arithmetic, or
   perhaps traversal of a doubly-linked list of items.
3. Pre-requisites for the interaction to be allowed:
a. Threshold energy
b. State dependency
c. ???
Essentially, these different interactions would represent different
collision models.
b.
   Create a consistent and simple interface to state-type information for
   particles. State-type information would include things like:
1. Internal quantum state
2. charge
15. Change the CrossSection::cross_section function into the operator()
function.
DONE
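   In other words (sketch):

     // before:  double sigma = cs.cross_section( v_relative );
     // after :  double sigma = cs( v_relative );
     virtual double operator() ( const double & v_relative ) const = 0;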
16. Make the particle property comparator be a policy-defined class--defaulting
to chimp::property::Comparator.
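   A sketch of the policy idea (the template shape is an assumption):

     // sketch: the database takes the comparator as a policy parameter
     template < typename PropertyComparator = chimp::property::Comparator >
     class RuntimeDB {
       // sorts/looks up particle properties using PropertyComparator
       // instead of a hard-coded comparator
     };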
17. Make a new cross-section model that pieces together models of arbitrary
types using arbitrary ranges for each sub-model.
This would make it easy to piece together empirical data with parameterized
models or perhaps two like-minded models with slightly different parameters
that are valid for different ranges of velocity.
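   One possible shape for such a composite model (a sketch only; it assumes
   the CrossSection interface sketched under items 2 and 15):

     // sketch: delegate to whichever sub-model owns the velocity range
     struct Piece {
       double v_lo, v_hi;               // half-open range [v_lo, v_hi)
       const CrossSection * model;
     };

     class PiecewiseCrossSection {
       std::vector< Piece > pieces;     // sorted by v_lo, non-overlapping
     public:
       double operator() ( const double & v ) const {
         for ( unsigned int i = 0; i < pieces.size(); ++i )
           if ( v >= pieces[i].v_lo && v < pieces[i].v_hi )
             return (*pieces[i].model)( v );
         return 0.0;                    // outside every sub-range
       }
     };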
18. Make a new cross-section model that automatically creates a cross-species
cross section from single-species cross section data, provided that the
cross-species data does not already exist in the database.
19. Add ability to perform variation on properties based on measured,
estimated... uncertainty.
20. Create a way for the database to be extended (sections to be added) at
runtime, without the need to edit the root and/or distributed xml files.
DONE
21. Do something like a least squares fit for picking the right factors for
doing the extrapolation for the DATA-model.
DONE (or at least improved)
22. Issue a warning when extrapolation is used--say which cross section is
extrapolating. Only issue the warning once!
DONE
   Perhaps it could count the number of occurrences of extrapolation, then
   issue warnings at some later time.
   DONE (almost, except for a handy way of issuing cumulative warnings when
   requested by the user)
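   A minimal sketch of the warn-once-and-count idea:

     #include <iostream>
     #include <string>

     struct ExtrapolationWarning {
       std::string   label;   // which cross section is extrapolating
       unsigned long count;   // total occurrences, for a later summary
       ExtrapolationWarning( const std::string & l ) : label(l), count(0u) {}

       void hit() {
         if ( ++count == 1u )           // warn only on the first occurrence
           std::clog << "warning: extrapolating cross section '"
                     << label << "'\n";
       }
     };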
23. Allow the DATA model to be modified such that B-spline interpolation can be done
for the cross section data returned. Still would probably default to
linear interpolation.
24. Define a new cross section type: constant
This will be useful for billiard-ball test problems and for some elastic
cross sections that currently exist in the database.
DONE
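   A trivial sketch of such a model (not necessarily how the finished
   version is implemented):

     // sketch: same value for every relative velocity
     class ConstantCrossSection : public CrossSection {
       double sigma;
     public:
       explicit ConstantCrossSection( const double & s ) : sigma(s) {}

       double operator() ( const double & /*v_relative*/ ) const {
         return sigma;
       }

       double find_max_sigma_v_rel( const double & v_rel_max ) const {
         return sigma * v_rel_max;      // sigma*v peaks at v_rel_max
       }
     };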