CLDR-17566 Converting Updating Codes P2 #4006

Merged
23 changes: 23 additions & 0 deletions docs/site/development/updating-codes/update-validity-xml.md
---
title: Update Validity XML
---

# Update Validity XML

1. Create the archive ([Creating the Archive](https://cldr.unicode.org/development/creating-the-archive)) with at least the last release (if you don't have it already)
2. Run GenerateValidityXML.java (a sample invocation is sketched after this list).
3. This updates files in cldr/common/validity/. (If you set -DSHOW_FILES, you'll see the updated files listed on the console.)
   1. New files should not be generated. If there are any, something has gone wrong, so raise this as an issue on cldr-dev. **Note:** cldr/common/validity/currency.xml contains a comment line - *\<!-- Deprecated values are those that are not legal tender in some country after 2021.* The year in that comment is generated by the tool, so it may show up as a diff. In that case, edit currency.xml to update the year in the comment to the current year and run the tool again.
   2. The units file is hand-curated. It is kept in sync with the units supported in root/en.xml. The attribute values test ensures that it is a superset.
4. Run git diff for a sanity check - there may be no change.
5. Subdivisions are special.
   1. At the end you may see a "Codes added this release:" list.
   2. Compare those country codes to the ones on the "deprecated" list.
   3. If any are the same, you'll have to compare the deprecated codes to the new codes, to see whether any deprecated code was changed into a new code.
   4. If so, add the necessary lines to supplemental data (below `<!-- end of data generated with GenerateSubdivisions -->`) of the form:
      - `<subdivisionAlias type="<DEP-CODE>" replacement="<NEW-CODE>" reason="deprecated"/>`
   5. Run the following (you must have all the archived versions loaded, back to cldr-28.0!):
      1. TestValidity -e9
6. If the results are OK, replace the files and check them in.
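
Putting these steps together, here is a minimal command-line sketch. It reuses the `java -jar ... cldr.jar` invocation pattern used for AddPopulationData later in these instructions; the jar location and the tool class/package names are assumptions and may differ in your checkout (many people run these tools from an IDE instead).

```bash
# Sketch only: assumes cldr.jar has been built from the tools tree and that
# CLDR_DIR points at your repo root; tool class/package names are assumed.
CLDR_DIR=$HOME/src/cldr

# Steps 2-3: regenerate common/validity/*; -DSHOW_FILES lists the files touched.
java -DCLDR_DIR=$CLDR_DIR -DSHOW_FILES=true -jar cldr.jar \
  org.unicode.cldr.tool.GenerateValidityXML

# Step 4: sanity check (there may be no change).
git -C "$CLDR_DIR" diff -- common/validity

# Step 5.5: run TestValidity with the argument -e9 (e.g. from an IDE run
# configuration); it needs the archived versions back to cldr-28.0.
```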

---
title: Updating Population, GDP, Literacy
---

# Updating Population, GDP, Literacy

**Updated 2021-02-10 by Yoshito**

These instructions are based on the Chrome browser.

## Load the World DataBank

**The World DataBank is at http://databank.worldbank.org/data/views/variableselection/selectvariables.aspx?source=world-development-indicators. Unfortunately, they keep changing the link. If the page has been moved, try to get to it by doing the following. Each of the links below is what currently works, but that again may change.**

1. Go to http://worldbank.org
2. Click "View More Data" in the Data section (http://data.worldbank.org/)
3. Click "Data Catalog" (http://datacatalog.worldbank.org/)
4. Search "World Development Indicators" (http://data.worldbank.org/data-catalog/world-development-indicators)
5. In the "Data & Resources" tab, click on the blue "Databank" link. It should open a new window - https://databank.worldbank.org/reports.aspx?source=world-development-indicators

Once you are there, generate a file by using the following steps. There are three collapsible sections: "Country", "Series", and "Time".

- Countries
  - Expand the "Country" section, click the "Countries" tab, and then click the "Select All" button on the left. You do NOT want the aggregates here, just the countries. There were 217 countries on the list when these instructions were written; if there are substantially more than that, you may have mistakenly included aggregates.
- Series
  - Expand the "Series" section.
  - Select "Population, total"
  - Select "GNI, PPP (current international $)"
- Time
  - Select all years starting at 2000 up to the latest available year. The latest as of this writing was "2021". Be careful here, because sometimes it will list a year as being available, but there will be no real data there, which messes up our tooling.
  - The tooling will automatically handle new years.
- Click the "Download Options" link in the upper right.
- A small "Download options" box will appear.
- Select "CSV"
- Instruct your browser to save the file.
- You will receive a ZIP file named "**Data_Extract_From_World_Development_Indicators.zip**".
- Unpack this zip file. It will contain two files.
  - (From a unix command line, you can unpack it with `unzip -j -a -a Data_Extract_From_World_Development_Indicators.zip` to junk subdirectories and force the file to LF line endings.)
- The larger file (126kb as of 2021-02-10) contains the actual data we are interested in. The file name should be something like f17e18f5-e161-45a9-b357-cba778a279fd_Data.csv
- The smaller file is just a field definitions file that we don't care about.
- Verify that the data file is of the form:
  - Country Name,Country Code,Series Name,Series Code,2000 [YR2000],2001 [YR2001],2004 [YR2004],...
  - Afghanistan,AFG,"Population, total",SP.POP.TOTL,19701940,20531160,23499850,24399948,25183615,...
  - Afghanistan,AFG,"GNI, PPP (current international $)",NY.GNP.MKTP.PP.CD,..,..,22134851020.6294,25406550418.3726,27761871367.4836,32316545463.8146,...
  - Albania,ALB,"Population, total",SP.POP.TOTL,3089027,3060173,3026939,3011487,2992547,2970017,...
  - ...
- Rename it to **world_bank_data.csv** and save it in **{cldr}/tools/cldr-code/src/main/resources/org/unicode/cldr/util/data/external/**
- Diff the old version vs. the current (a command sketch follows this list).
- If the format changes, you'll have to modify WBLine in AddPopulationData.java to have the right order and contents.
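
The unpack/rename/diff sequence above can be scripted roughly as follows. This is a minimal sketch, assuming the ZIP was downloaded to the current directory and {cldr} is checked out at ~/src/cldr; the hashed data file name will differ for each download.

```bash
# Sketch only: adjust CLDR_DIR and the downloaded file name to your setup.
CLDR_DIR=$HOME/src/cldr
EXTERNAL=$CLDR_DIR/tools/cldr-code/src/main/resources/org/unicode/cldr/util/data/external

# Unpack, junking subdirectories and forcing LF line endings.
unzip -j -a -a Data_Extract_From_World_Development_Indicators.zip

# The larger *_Data.csv file is the one we want; replace the old copy and diff.
cp ./*_Data.csv "$EXTERNAL/world_bank_data.csv"
git -C "$CLDR_DIR" diff -- tools/cldr-code/src/main/resources/org/unicode/cldr/util/data/external/world_bank_data.csv
```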

## Load UN Literacy Data

1. Go to http://unstats.un.org/unsd/demographic/products/socind/default.htm
2. Click on "Education"
3. Click on "Table 4a - Literacy"
4. Download the data and save it as a temporary file
5. Open it in Excel, OpenOffice, or Numbers, and save it as cldr/tools/java/org/unicode/cldr/util/data/external/un_literacy.csv (Windows Comma Separated)
   1. If it has multiple sheets, you want the one that says "Data", and looks like:
      - Table 4a. Literacy
      - Last update: December 2012
      - Country or area   Year   Adult (15+) literacy rate   Youth (15-24) literacy rate
      - Total   Men   Women   Total   Men   Women
      - Albania   2008   96   97   95   99   99   99
6. Diff the old version vs. the current (a quick check is sketched after this list).
7. If the format changes, you'll have to modify the loadUnLiteracy() method in **org/unicode/cldr/tool/AddPopulationData.java**
8. Note that the content does not seem to have changed since 2012, but the page says "*Please note this page is currently under revision*."
   1. If there is no change to the data (still no change 10 years later), there is no reason to commit a new version of the file.
   2. See also [CLDR-15923](https://unicode-org.atlassian.net/browse/CLDR-15923)
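
A small sanity check of the saved file, assuming it was saved to the external data directory (the path in step 5 reflects an older tools/java layout; current checkouts use tools/cldr-code/src/main/resources/...):

```bash
# Sketch only: confirm the file was saved as Windows (CRLF) CSV and see what changed.
CLDR_DIR=$HOME/src/cldr
cd "$CLDR_DIR/tools/cldr-code/src/main/resources/org/unicode/cldr/util/data/external"
file un_literacy.csv         # expect "... with CRLF line terminators"
git diff -- un_literacy.csv  # if nothing changed, there is no need to commit
```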

## Load CIA Factbook

**Note:** The pages from the original instructions have moved to the locations below. These pages no longer provide a text version compatible with the files in CLDR. ([CLDR-14470](https://unicode-org.atlassian.net/browse/CLDR-14470))

- Population: https://www.cia.gov/the-world-factbook/field/population
- Real GDP (purchasing power parity): https://www.cia.gov/the-world-factbook/field/real-gdp-purchasing-power-parity
1. All files are saved in **cldr/tools/java/org/unicode/cldr/util/data/external/**
2. Go to: https://www.cia.gov/library/publications/the-world-factbook/index.html
3. Go to the "References" tab, and click on "Guide to Country Comparisons"
4. Expand "People and Society" and click on "Population"
   1. There's a "download" icon on the right side of the header. Right-click it, choose Save Link As..., and call it
   2. **factbook_population.txt**
   3. **You may need to delete header lines. The first line should begin with "1 China … " or similar.**
5. Back up a page, then expand "Economy" and click on "GDP (purchasing power parity)"
   1. Right-click on Download Data, choose Save Link As..., and call it
   2. **factbook_gdp_ppp.txt**
   3. **You may need to delete header lines. The first line should begin with "1 China … " or similar.**
6. Literacy - **no longer works, so we need to revise the program. They are still publishing updates to the data on this page; we just need to write some code to put the data into a form we can use. See** [**CLDR-9756 (comment 4)**](https://unicode-org.atlassian.net/browse/CLDR-9756?focusedCommentId=118608)
1. ~~https://www.cia.gov/library/publications/the-world-factbook/fields/2103.html~~ maybe https://www.cia.gov/library/publications/the-world-factbook/fields/370.html ?
2. ~~Right Click on "Download Data", Save Link As... Call it~~
3. ~~**factbook\_literacy.txt**~~
7. Diff the old version vs. the current (a quick check is sketched after this list).
8. If the format changes, you'll have to modify the loadFactbookLiteracy() method in **org/unicode/cldr/tool/AddPopulationData.java**
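
A minimal sketch of that check, assuming the files were saved into the external data directory under the current tools/cldr-code layout (the path in step 1 above reflects an older tools/java layout):

```bash
# Sketch only: verify the saved Factbook files start with data rows; any header
# lines above the first "1 China ..." row should be deleted by hand.
CLDR_DIR=$HOME/src/cldr
cd "$CLDR_DIR/tools/cldr-code/src/main/resources/org/unicode/cldr/util/data/external"

head -n 1 factbook_population.txt   # should begin with "1 China ..." or similar
head -n 1 factbook_gdp_ppp.txt      # should begin with "1 China ..." or similar

git diff -- .                       # compare the old versions vs. the current
```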

## Convert the data

1. If you saw any different country names above, you'll need to edit external/alternate_country_names.txt to add them.
   1. For example, we needed to add Czechia in 2016.
   2. Q: How would I know? Run "AddPopulationData -DADD_POP=true" (step 3 below) and look for errors.
2. If two-letter non-countries are added, then you'll need to adjust StandardCodes.isCountry.
   1. Q: How would I know? Run "AddPopulationData -DADD_POP=true" (step 3 below) and look for errors.
3. Run the tool: **java -jar -DADD_POP=true -DCLDR_DIR=${HOME}/src/cldr cldr.jar org.unicode.cldr.tool.AddPopulationData** (a fuller command sketch follows this list)
4. Once everything looks ok, check everything in to git.
5. Once done, run the ConvertLanguageData tool as described in [Update Language Script Info](https://cldr.unicode.org/development/updating-codes/update-language-script-info)
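
For reference, an end-to-end sketch of steps 3-4. It assumes the tools jar is built with Maven under tools/ and ends up at tools/cldr-code/target/cldr-code.jar (referred to as cldr.jar above); the exact jar path and the commit message are placeholders.

```bash
# Sketch only: build the tools jar, run AddPopulationData, then review and commit.
CLDR_DIR=$HOME/src/cldr
cd "$CLDR_DIR/tools" && mvn -DskipTests=true package   # jar path below is assumed

java -jar -DADD_POP=true -DCLDR_DIR="$CLDR_DIR" \
  cldr-code/target/cldr-code.jar org.unicode.cldr.tool.AddPopulationData

# If there were no errors, review the changed data files and check them in.
git -C "$CLDR_DIR" status
git -C "$CLDR_DIR" add tools/cldr-code/src/main/resources/org/unicode/cldr/util/data/external
git -C "$CLDR_DIR" commit -m "CLDR-XXXXX Update population, GDP, and literacy data"   # XXXXX: your ticket
```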

85 changes: 85 additions & 0 deletions docs/site/development/updating-codes/updating-script-metadata.md
---
title: Updating Script Metadata
---

# Updating Script Metadata

### New Unicode scripts

We should work on script metadata early for a Unicode version, so that it is available for tools (such as Mark's "UCA" tools).

- Unicode 9/CLDR 29: New scripts in CLDR but not yet in ICU caused trouble.
- Unicode 10: Working on a pre-CLDR-31 branch, plan to merge into CLDR trunk after CLDR 31 is done.
  - Should the script metadata code live in the Unicode Tools, so that we don't need a CLDR branch during early Unicode next-version work?

If the new Unicode version's PropertyValueAliases.txt does not have lines for Block and Script properties yet, then create a preliminary version. Diff the Blocks.txt file and UnicodeData.txt to find new scripts. Get the script codes from <http://www.unicode.org/iso15924/codelists.html>. Follow existing patterns for block and script names, especially for abbreviations. Do not add abbreviations (which differ from the long forms) unless there is a well-established pattern in the existing data.

Aside from instructions below for all script metadata changes, new script codes need English names (common/main/en.xml) and need to be added to common/supplemental/coverageLevels, under key %script100, so that the new script names will show up in the survey tool. For example, see the [changes for new Unicode 8 scripts](https://unicode-org.atlassian.net/browse/CLDR-8109).

Can we add new scripts in CLDR *trunk* before or only after adding them to CLDR's copy of ICU4J? We did add new Unicode 9 scripts in CLDR 29 before adding them to ICU4J. The CLDR unit tests do not fail any more for scripts that are newer than the Unicode version in CLDR's copy of ICU.

### Sample characters

We need sample characters for the "UCA" tools for generating FractionalUCA.txt.

Look for patterns of what kinds of characters we have picked for other scripts, for example the script's letter "KA". We basically want a character where people say "that looks Greek", and the same shape should not be used in multiple scripts. So for Latin we use "L", not "A". We usually prefer consonants, if applicable, but it is more important that a character look unique across scripts. It does want to be a *letter*, and if possible should not be a combining mark. It would be nice if the letters were commonly used in the majority language, if there are multiple. Compare with the [charts for existing scripts](http://www.unicode.org/charts/), especially related ones.

### Editing the spreadsheet

Google Spreadsheet: [Script Metadata](https://docs.google.com/spreadsheets/d/1Y90M0Ie3MUJ6UVCRDOypOtijlMDLNNyyLk36T6iMu0o/edit#gid=0)

Use and copy cell formulas rather than duplicating contents, if possible. Look for which cells have formulas in existing data, especially for Unicode 1.1 and 7.0 scripts.

For example,

- Script names should only be entered on the LikelyLanguage sheet. Other sheets should use a formula to map from the script code.
- On the Samples sheet, use a formula to map from the code point to the actual character. This is especially important for avoiding mistakes since almost no one will have font support for the new scripts, which means that most people will see "Tofu" glyphs for the sample characters.

### Script Metadata properties file
1. Go to the spreadsheet [Script Metadata](https://docs.google.com/spreadsheets/d/1Y90M0Ie3MUJ6UVCRDOypOtijlMDLNNyyLk36T6iMu0o/edit#gid=0)
   1. File > Download as > Comma Separated Values
   2. Location/Name = {CLDR}/tools/cldr-code/src/main/resources/org/unicode/cldr/util/data/Script_Metadata.csv
   3. Refresh files (eclipse), then compare with the previous version for a sanity check. If there are no new scripts for the target Unicode version of the CLDR release you're working on, then skip the rest of the steps below. For example, the script "Toto" is ignored for CLDR 39 because the target Unicode release of CLDR 39 is Unicode 13 and "Toto" will be added in Unicode 14.
2. **Note: VM arguments**
   1. Each tool (and test) needs -DCLDR_DIR=/usr/local/google/home/mscherer/cldr/uni/src (or wherever your repo root is)
   2. It is easiest to set this once in the global Preferences, rather than in the Run Configuration for each tool.
   3. Most of these tools also need -DSCRIPT_UNICODE_VERSION=14 (set to the upcoming Unicode version), but it is easier to edit the ScriptMetadata.java line that sets the UNICODE_VERSION variable.
   4. Run {cldr}/tools/cldr-code/src/test/java/org/unicode/cldr/unittest/TestScriptMetadata.java
   5. A common cause of errors is that some of the data from the spreadsheet is missing or has incorrect values.
3. Run GenerateScriptMetadata (a command sketch follows this list), which will produce a modified [common/properties/scriptMetadata.txt](https://github.com/unicode-org/cldr/blob/main/common/properties/scriptMetadata.txt) file.
   1. If this ignores the new scripts: check -DSCRIPT_UNICODE_VERSION or the ScriptMetadata.java UNICODE_VERSION.
   2. Add the English script names (from the script metadata spreadsheet) to common/main/en.xml.
   3. Add the French script names from [ISO 15924](https://www.unicode.org/iso15924/iso15924-codes.html) to common/main/fr.xml, but mark them as draft="provisional".
4. Add the script codes to common/supplemental/coverageLevels.xml (under key %script100) so that the new script names will show up in the CLDR survey tool.
   1. See [#8109#comment:4](https://unicode-org.atlassian.net/browse/CLDR-8109#comment:4) [r11491](https://github.com/unicode-org/cldr/commit/1d6f2a4db84cc449983c7a01e5a2679dc1827598)
   2. See changes for Unicode 10: <http://unicode.org/cldr/trac/review/9882>
   3. See changes for Unicode 12: [CLDR-11478](https://unicode-org.atlassian.net/browse/CLDR-11478) [commit/647ce01](https://github.com/unicode-org/cldr/commit/be3000629ca3af2ae77de6304480abefe647ce01)
5. Maybe add the script codes to the TestCoverageLevel.java variable script100.
   1. Starting with [cldr/pull/1296](https://github.com/unicode-org/cldr/pull/1296) we should not need to list a script here explicitly unless it is Identifier_Type=Recommended.
6. Remove new script codes from $scriptNonUnicode in common/supplemental/attributeValueValidity.xml if needed.
7. For the following step to work as expected, the CLDR copy of the IANA BCP 47 language subtag registry must be updated (at least with the new script codes).
   1. Copy the latest version of https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry to {CLDR}/tools/cldr-code/src/main/resources/org/unicode/cldr/util/data/language-subtag-registry
   2. Consider copying only the new script subtags (and making a note near the top of the CLDR file, or adding lines like "Comments: Unicode 14 script manually added 2021-06-01") to avoid having to update other parts of CLDR.
8. Run GenerateValidityXML.java like this:
   1. See [Update Validity XML](https://cldr.unicode.org/development/updating-codes/update-validity-xml)
   2. This needs the previous version of CLDR in a sibling folder.
      1. See [Creating the Archive](https://cldr.unicode.org/development/creating-the-archive) for details on running the CheckoutArchive tool.
   3. Now run GenerateValidityXML.java.
   4. If this crashes with a NullPointerException trying to create a Validity object, check that ToolConstants.LAST_RELEASE_VERSION is set to the actual last release.
      1. Currently, the CHART_VERSION must be a simple integer, no ".1" suffix.
9. At least script.xml should show the new scripts. The generator overwrites the source data file; use `git diff` or `git difftool` to make sure the new scripts have been added.
10. Run GenerateMaximalLocales, [as described on the likelysubtags page](https://cldr.unicode.org/development/updating-codes/likelysubtags-and-default-content), which generates another two files.
11. Compare the latest git master files with the generated ones: `meld common/supplemental ../Generated/cldr/supplemental`
    1. Copy likelySubtags.xml and supplementalMetadata.xml to the latest git master if they have changes.
12. Compare the generated files with the previous versions for a sanity check.
13. Run the CLDR unit tests.
    1. Project cldr-core: Debug As > Maven test
14. These tests have sometimes failed:
    1. LikelySubtagsTest
    2. TestInheritance
    3. They may need special adjustments, for example adding an extra entry to MAX_ADDITIONS or LANGUAGE_OVERRIDES in GenerateMaximalLocales.java.
    4. Check in the updated files.
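
For reference, a minimal command-line sketch of the tool runs in steps 3, 7, 8, and 10, reusing the `java -jar ... cldr.jar` pattern from the population/GDP page. The jar path and the tool package names are assumptions (many of these tools are normally run from an IDE with the VM arguments noted above); adjust to your setup.

```bash
# Sketch only: jar path and tool package names are assumed; set the properties
# (-DCLDR_DIR, -DSCRIPT_UNICODE_VERSION) as described above.
CLDR_DIR=$HOME/src/cldr
UVERSION=14                  # the upcoming Unicode version (example)
TOOL=org.unicode.cldr.tool   # package assumed; adjust if a tool lives elsewhere

# Step 3: regenerate common/properties/scriptMetadata.txt
java -DCLDR_DIR=$CLDR_DIR -DSCRIPT_UNICODE_VERSION=$UVERSION -jar cldr.jar $TOOL.GenerateScriptMetadata

# Step 7.1: refresh the CLDR copy of the IANA language subtag registry
curl -o "$CLDR_DIR/tools/cldr-code/src/main/resources/org/unicode/cldr/util/data/language-subtag-registry" \
  https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry

# Step 8: regenerate the validity files (needs the previous release in a sibling folder)
java -DCLDR_DIR=$CLDR_DIR -jar cldr.jar $TOOL.GenerateValidityXML

# Step 10: regenerate likelySubtags.xml and supplementalMetadata.xml
java -DCLDR_DIR=$CLDR_DIR -jar cldr.jar $TOOL.GenerateMaximalLocales

# Steps 9 and 11: review the results
git -C "$CLDR_DIR" diff -- common/validity common/properties/scriptMetadata.txt
meld "$CLDR_DIR/common/supplemental" "$CLDR_DIR/../Generated/cldr/supplemental"
```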

Problems are typically caused by a non-standard name being used for a territory. Fix the name and rerun the process.
