How to Set Up an Aggregated Portal
These instructions assume a common mapping is used for all collections. It is also possible to use different mappings and to adjust the CSV file in step 3d so that the data fits the mapping used by the Solr instance.
1. Create a common mapping for the collections you want to aggregate. The mapping must contain a field that identifies which collection a record belongs to. (See step 3e below.)
2. For one collection, use the Exporter App to build the mapping and export for the web portal, then use the web portal installer to set up Solr and the portal application. (See README.md)
3. For each additional collection:
a) Import the common mapping if necessary.
b) Use the Exporter application to build the cache table for the mapping and export for the web portal.
c) Unzip the file exported in step b.
d) Import the PortalData.csv file into Solr:
curl '[COREURL]/update/csv?commit=true&encapsulator="&escape=\&header=true' --data-binary @[PATH_TO_CSVFILE] -H 'Content-type:application/csv'
For example:
curl 'http://localhost:8983/solr/hollow/update/csv?commit=true&encapsulator="&escape=\&header=true' --data-binary @/home/omit/voucherwebsearchcsv/PortalFiles/PortalData.csv -H 'Content-type:application/csv'
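After the import completes, it can be useful to confirm that the records were added by checking the core's document count with a standard Solr select query. The request below is only a sketch; it assumes [COREURL] is the base URL of the Solr core, as in the example above:
curl '[COREURL]/select?q=*:*&rows=0&wt=json'
The numFound value in the response should match the number of records exported for the collection, plus any records imported earlier.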
Extremely large files (300k+ records) may need to be split to avoid out-of-memory errors in curl. This can be done with the split command, e.g. "split -l120000 PortalData.csv PortalDataSplit". The CSV header line can then be added to each split file other than the first (which already contains it) with sed, e.g. "sed -i '1i[HEADER-LINE]' PortalDataSplitab". A sketch of the whole sequence is shown below.
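For illustration only, the full split-and-import sequence might look like the following shell sketch. The 120000-line chunk size and the PortalDataSplit prefix follow the examples above; the HEADER variable, the loops, and the assumption that split produces suffixes starting at "aa" are illustrative details, and [COREURL] and the CSV path should be replaced as before.
# split the exported CSV into chunks of 120000 lines each
split -l120000 PortalData.csv PortalDataSplit
# copy the header line of the original file into every chunk except the first,
# which already contains it
HEADER=$(head -n1 PortalData.csv)
for f in PortalDataSplit*; do
    [ "$f" = "PortalDataSplitaa" ] || sed -i "1i$HEADER" "$f"
done
# post each chunk to the Solr core
for f in PortalDataSplit*; do
    curl '[COREURL]/update/csv?commit=true&encapsulator="&escape=\&header=true' --data-binary @"$f" -H 'Content-type:application/csv'
done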
e) Configure the Portal Application
In the resources/config/settings.json file, set collCodeSolrFld to the name in the Solr schema of the field chosen to indicate a record's collection. You can find the field's Solr name in resources/config/fldmodel.json.
For each collection, create an entry in the collections setting. Each entry contains three fields:
- code: the value, in the field named by collCodeSolrFld, that identifies the collection.
- collname: the equivalent of the collectionName setting; it is used to determine the address of attachments to records in the collection and is usually the same as the collection name in the Specify database.
- mapicon: the map marker icon for objects in the collection. If it is not present, the default marker is used.
Example:
"collCodeSolrFld": "Code",
"collections": [
{
"code": "KUH",
"collname": "KUHerpetology",
"mapicon": "resources/images/herp_marker-01.png"
},
{
"code": "KUO",
"collname": "Ornithology",
"mapicon": "resources/images/orni_marker-01.png"
},
{
"code": "KANU",
"collname": "KANU",
"mapicon": "resources/images/botany_marker-01.png"
},
{
"code": "KUI",
"collname": "KU Fish Voucher Collection",
"mapicon": "resources/images/ichthy_marker-01.png"
},
{
"code": "KUM",
"collname": "Mammalogy",
"mapicon": "resources/images/mammal_marker-01.png"
}
],
If collections is defined, the collectionName setting is ignored.
The doClusterFx setting may also be relevant for large aggregations. Setting it to true enables clustering of map markers, which can improve the performance and appearance of the map view. The following settings are available for clustering:
{name: 'doClusterFx', type: 'boolean', defaultValue: false},
{name: 'clusterMinPoints', type: 'int', defaultValue: 3000},
{name: 'clusterGridSize', type: 'int'},
{name: 'clusterMaxZoom', type: 'int'},
{name: 'clusterZoomOnClick', type: 'boolean'},
{name: 'clusterImagePath', type: 'string', defaultValue: 'resources/images/m'},
{name: 'clusterImageExtension', type: 'string'},
{name: 'clusterAverageCenter', type: 'boolean'},
{name: 'clusterMinimumClusterSize', type: 'int'},
{name: 'clusterStyles', type: 'json'}
See https://github.com/gmaps-marker-clusterer/gmaps-marker-clusterer for more information.
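As an illustration only, clustering might be enabled by adding entries such as the following to resources/config/settings.json. clusterMinPoints is shown with its default value from the list above; the grid size, max zoom, and zoom-on-click values are placeholder assumptions that should be tuned to the size of the aggregation:
"doClusterFx": true,
"clusterMinPoints": 3000,
"clusterGridSize": 60,
"clusterMaxZoom": 15,
"clusterZoomOnClick": true,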