ppcpy.io#
ppcpy.io.loadConfigs#
- ppcpy.io.loadConfigs.loadPicassoConfig(picasso_config_file, picasso_default_config_file)[source]#
Load the general Picasso config file.
- Parameters:
- picasso_config_file : str or path
the specific config file
- picasso_default_config_file : str or path
the default (template) file
- Returns:
- picasso_config : dict
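A minimal sketch of what `loadPicassoConfig` might do, assuming the configs are JSON files and that the specific config overrides the defaults (both the file format and the override semantics are assumptions, not confirmed by the source):

```python
import json

def load_picasso_config(picasso_config_file, picasso_default_config_file):
    """Load the default (template) config first, then let the specific
    config file override those defaults (assumed merge semantics)."""
    with open(picasso_default_config_file) as f:
        picasso_config = json.load(f)
    with open(picasso_config_file) as f:
        picasso_config.update(json.load(f))  # specific settings take precedence
    return picasso_config
```

A flat `dict.update` merge is the simplest choice; nested config sections would need a recursive merge instead.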
- ppcpy.io.loadConfigs.readPollyNetConfigLinkTable(polly_config_table_file, timestamp, device)[source]#
- ppcpy.io.loadConfigs.fix_indexing(config_dict, keys=['first_range_gate_indx', 'bgCorRangeIndx', 'bgCorRangeIndxLow', 'bgCorRangeIndxHigh', 'LCMeanMinIndx', 'LCMeanMaxIndx'])[source]#
- ppcpy.io.loadConfigs.getPollyConfigfromArray(polly_config_array, picasso_config_dict)[source]#
Load the config for the identified time; the aim is to declutter the runscript.
- Parameters:
- polly_config_array : pandas DataFrame
selected line from the links.xlsx
- picasso_config_dict : dict
general picasso config
- Returns:
- polly_config_dict : dict
ppcpy.io.readMeteo#
- class ppcpy.io.readMeteo.Meteo(meteorDataSource, meteo_folder, meteo_file)[source]#
LOADMETEOR: read meteorological data.

USAGE:
[temp, pres, relh, wins, wind, meteorAttri] = loadMeteor(mTime, asl)

INPUTS:
- mTime : array
query time.
- asl : array
height above sea level. (m)

KEYWORDS:
- meteorDataSource : str
meteorological data type, e.g. 'gdas1' (default), 'standard_atmosphere', 'websonde', 'radiosonde', 'nc_cloudnet'
- gdas1Site : str
the GDAS1 site for the current campaign.
- meteo_folder : str
the main folder of the GDAS1 profiles (or the cloudnet profiles).
- radiosondeSitenum : integer
site number, which can be found in doc/radiosonde-station-list.txt.
- radiosondeFolder : str
the folder of the sounding files.
- radiosondeType : integer
file type of the radiosonde file. 1: radiosonde file for MOSAiC (default); 2: radiosonde file for MUA
- flagReadLess : logical
flag to determine whether to access meteorological data at a certain time interval. (default: false)
- method : char
interpolation method. (default: 'nearest')
- isUseLatestGDAS : logical
whether to search for the latest available GDAS profile. (default: false)

OUTPUTS:
- temp : matrix (time * height)
temperature for each range bin. [°C]
- pres : matrix (time * height)
pressure for each range bin. [hPa]
- relh : matrix (time * height)
relative humidity for each range bin. [%]
- wins : matrix (time * height)
wind speed. (m/s)
- meteorAttri : struct
dataSource (cell): the data source used in the data processing for each cloud-free group. URL (cell): the data file info for each cloud-free group. datetime (array): datetime label for the meteorological data.

HISTORY:
- 2021-05-22: first edition by Zhenping

.. Authors: - zhenping@tropos.de
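The `method: 'nearest'` keyword above can be illustrated with a small sketch: for each query time, take the meteorological profile closest in time. The function name and array shapes here are illustrative, not the module's actual interface:

```python
import numpy as np

def select_nearest_profiles(query_times, met_times, met_profiles):
    """For each query time, pick the meteorological profile closest in time.

    query_times:  (n_query,) query timestamps
    met_times:    (n_met,) timestamps of the available profiles
    met_profiles: (n_met, n_height) profile data, e.g. temperature
    """
    # pairwise |met_time - query_time|, then the index of the closest profile
    idx = np.abs(met_times[None, :] - query_times[:, None]).argmin(axis=1)
    return met_profiles[idx, :]
```

For hourly or 3-hourly model data this nearest-neighbour lookup is usually sufficient; a time-weighted interpolation would be the next refinement.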
- class ppcpy.io.readMeteo.MeteoNcCloudnet(basepath, filepattern)[source]#
TODO: for now only one filename; define the preferred model.
- load(time, height_grid)[source]#
load the data
not quite sure on the interface yet:
`met.load(data_cube.retrievals_highres['time'][0])` or `met.load(datetime.datetime.timestamp(datetime.datetime.strptime(data_cube.date, '%Y%m%d')))`
- Recipe:
load
select variables?
rename?
regrid from (time, level) to (time, lidar heights)
clarify the above-ground vs. above-sea-level issues
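The regridding step in the recipe above can be sketched as a per-timestep linear interpolation from the model levels onto the lidar height grid. The function name, argument layout, and the `station_asl` offset (to reconcile above-ground lidar heights with above-sea-level model heights) are assumptions for illustration:

```python
import numpy as np

def regrid_to_lidar_heights(model_heights, model_field, lidar_heights, station_asl=0.0):
    """Interpolate a (time, level) model field onto the lidar height grid.

    model_heights: (time, level) model heights above sea level [m]
    model_field:   (time, level) variable, e.g. temperature
    lidar_heights: (n_heights,) lidar range gates above ground [m]
    station_asl:   station altitude [m], converts above-ground to above-sea-level
    """
    target = lidar_heights + station_asl
    out = np.empty((model_field.shape[0], target.shape[0]))
    for it in range(model_field.shape[0]):
        # np.interp requires the model heights to be increasing per time step
        out[it, :] = np.interp(target, model_heights[it, :], model_field[it, :])
    return out
```

Beyond the top model level, `np.interp` clamps to the last value; extrapolation or masking would need extra handling.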
ppcpy.io.readPollyRawData#
ppcpy.io.write2nc#
- ppcpy.io.write2nc.write_channelwise_2_nc_file(data_cube, root_dir=PosixPath('/mnt/c/Users/radenz/dev/PicassoPy/PicassoPy'), prod_ls=[])[source]#
- ppcpy.io.write2nc.write2nc_file(data_cube, root_dir=PosixPath('/mnt/c/Users/radenz/dev/PicassoPy/PicassoPy'), prod_ls=[])[source]#
- ppcpy.io.write2nc.write_profile2nc_file(data_cube, root_dir: str = PosixPath('/mnt/c/Users/radenz/dev/PicassoPy/PicassoPy'), prod_ls: list = [], collect_debug: bool = False)[source]#
Save profile data to NetCDF4 files.
- Parameters:
- data_cube : object
Main PicassoProc object
- root_dir : str
- prod_ls : list
List of product names
- .. TODO::
Missing comment in variable attributes. Not all retrievals / information needed for the profiles are in data_cube.retrievals_highres… write docstring
ppcpy.io.sql_interaction#
- ppcpy.io.sql_interaction.string_to_ts(s)[source]#
Convert a string of format %Y-%m-%d %H:%M:%S to a timezone-aware timestamp.
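A minimal sketch of such a conversion, assuming UTC and a POSIX timestamp as return value (both assumptions; the actual function may return a different timestamp type):

```python
from datetime import datetime, timezone

def string_to_ts(s):
    """Parse '%Y-%m-%d %H:%M:%S', attach UTC, and return a POSIX timestamp."""
    # strptime yields a naive datetime; replace() makes it timezone-aware
    return datetime.strptime(s, '%Y-%m-%d %H:%M:%S').replace(tzinfo=timezone.utc).timestamp()
```

Attaching the timezone explicitly matters: `datetime.timestamp()` on a naive datetime would silently assume the local timezone.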
- ppcpy.io.sql_interaction.get_from_sql_db(db_path: str, table_name: str, ts_interval: list[str]) dict[source]#
Read a lidar calibration constant or depol calibration from the database.
- Parameters:
- db_path : str
name of the specific sqlite db file.
- table_name : str
default 'lidar_calibration_constant'
- ts_interval : list of str
the date or timestamp interval to look for
- Returns:
- dict
in calibration storage format
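A sketch of how such a lookup could work with the `sqlite3` standard library. The column name `cali_start_time` and the list-of-dicts return shape are assumptions; the actual "calibration storage format" is not specified in the source:

```python
import sqlite3

def get_from_sql_db(db_path, table_name, ts_interval):
    """Fetch calibration rows whose start time falls inside [start, stop]."""
    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row  # access columns by name
    cur = con.execute(
        f'SELECT * FROM {table_name} WHERE cali_start_time BETWEEN ? AND ?',
        ts_interval,
    )
    rows = [dict(row) for row in cur.fetchall()]
    con.close()
    return rows
```

The timestamps are bound as query parameters; only the table name, which cannot be parameterised in SQLite, is interpolated into the statement.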
- ppcpy.io.sql_interaction.prepare_for_sql_db_writing(data_cube, parameter: str, method: str) list[tuple][source]#
Collect all necessary variables and store them in a list of tuples for insertion into a SQLite table.
- Parameters:
- data_cube : object
- parameter : str
LC or DC
- method : str
klett or raman
- Returns:
- rows_to_insert : list of tuples
- ppcpy.io.sql_interaction.setup_empty(db_path: str, table_name: str, column_names: list[str], data_types: list[str], unique: str = '')[source]#
Create/initialise an empty database.
- Parameters:
- db_path : str
Path to the SQLite database file.
- table_name : str
Name of the target table.
- column_names : list of str
List of column names to insert values into (e.g. ['col1', 'col2']).
- data_types : list of str
List of SQLite data types for each respective column (e.g. ['text', 'real']).
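A sketch of what `setup_empty` might assemble from these parameters. The auto-incrementing `id` column and the `IF NOT EXISTS` guard are assumptions, motivated by the ID behaviour described in the Notes of `write_rows_to_sql_db` below:

```python
import sqlite3

def setup_empty(db_path, table_name, column_names, data_types, unique=''):
    """Create the table if it does not exist, with a rowid-backed id column
    and, optionally, a UNIQUE constraint over the given column list."""
    cols = ', '.join(f'{name} {dtype}' for name, dtype in zip(column_names, data_types))
    sql = f'CREATE TABLE IF NOT EXISTS {table_name} (id INTEGER PRIMARY KEY, {cols}'
    if unique:
        sql += f', UNIQUE ({unique})'
    sql += ')'
    with sqlite3.connect(db_path) as con:  # context manager commits on success
        con.execute(sql)
```

With `IF NOT EXISTS` the call is idempotent, so it can safely run at every processing start.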
- ppcpy.io.sql_interaction.write_rows_to_sql_db(db_path: str, table_name: str, column_names: list[str], rows_to_insert: list[str])[source]#
Insert multiple rows into a SQLite table.
- Parameters:
- db_path : str
Path to the SQLite database file.
- table_name : str
Name of the target table.
- column_names : list of str
List of column names to insert values into (e.g. ['col1', 'col2']).
- rows_to_insert : list of tuples
Data to insert, e.g. [('a', 'b'), ('c', 'd')].
Notes
The IGNORE syntax did not work here. With the UNIQUE columns defined and INSERT OR REPLACE, at least the new values are updated, though the replaced rows are given a new ID.
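The new-ID behaviour from the note can be reproduced with plain `sqlite3` (table and column names here are illustrative): on a UNIQUE conflict, `INSERT OR REPLACE` first deletes the conflicting row and then inserts a fresh one, so the value is updated but the rowid changes.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE cal (id INTEGER PRIMARY KEY, cali_start_time TEXT, "
            "value REAL, UNIQUE (cali_start_time))")
# two independent calibration entries
con.execute("INSERT OR REPLACE INTO cal (cali_start_time, value) VALUES ('2024-01-01 00:00:00', 1.0)")
con.execute("INSERT OR REPLACE INTO cal (cali_start_time, value) VALUES ('2024-01-02 00:00:00', 1.5)")
first_id = con.execute("SELECT id FROM cal WHERE cali_start_time = '2024-01-01 00:00:00'").fetchone()[0]
# re-inserting the same timestamp replaces the row: the value is updated ...
con.execute("INSERT OR REPLACE INTO cal (cali_start_time, value) VALUES ('2024-01-01 00:00:00', 2.0)")
# ... but REPLACE deletes the conflicting row first, so the row gets a new id
new_id, new_value = con.execute(
    "SELECT id, value FROM cal WHERE cali_start_time = '2024-01-01 00:00:00'").fetchone()
```

To keep the ID stable, SQLite's upsert syntax (`INSERT ... ON CONFLICT (col) DO UPDATE SET ...`) updates the existing row in place instead of replacing it.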