ppcpy.io#

ppcpy.io.loadConfigs#

ppcpy.io.loadConfigs.loadPicassoConfig(picasso_config_file, picasso_default_config_file)[source]#

load the general Picasso config file

Parameters:
picasso_config_file : str or path

the specific config file

picasso_default_config_file : str or path

the default (template) file

Returns:
picasso_config_dict
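The intended default-plus-specific behaviour can be illustrated with a short sketch, assuming JSON config files where keys from the specific file override the template; this is not the actual ppcpy implementation:

```python
import json
from pathlib import Path

def load_picasso_config(picasso_config_file, picasso_default_config_file):
    # Read the default (template) config first, then overlay the
    # specific config so its keys take precedence.
    default = json.loads(Path(picasso_default_config_file).read_text())
    specific = json.loads(Path(picasso_config_file).read_text())
    return {**default, **specific}
```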
ppcpy.io.loadConfigs.readPollyNetConfigLinkTable(polly_config_table_file, timestamp, device)[source]#
ppcpy.io.loadConfigs.fix_indexing(config_dict, keys=['first_range_gate_indx', 'bgCorRangeIndx', 'bgCorRangeIndxLow', 'bgCorRangeIndxHigh', 'LCMeanMinIndx', 'LCMeanMaxIndx'])[source]#
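Judging by the function name and the default keys, fix_indexing presumably shifts MATLAB-style 1-based index entries in the config to Python's 0-based convention. A hypothetical sketch, not the actual implementation:

```python
def fix_indexing(config_dict, keys=('first_range_gate_indx', 'bgCorRangeIndx')):
    # Copy so the caller's dict is left untouched.
    fixed = dict(config_dict)
    for key in keys:
        if key in fixed:
            value = fixed[key]
            # Entries may be scalars or lists of indices.
            if isinstance(value, list):
                fixed[key] = [v - 1 for v in value]
            else:
                fixed[key] = value - 1
    return fixed
```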
ppcpy.io.loadConfigs.getPollyConfigfromArray(polly_config_array, picasso_config_dict)[source]#

load the Polly config for the identified time

The aim is to declutter the runscript.

Parameters:
polly_config_array : pandas DataFrame

selected line from the links.xlsx

picasso_config_dict : dict

general picasso config

Returns:
polly_config_dict : dict
ppcpy.io.loadConfigs.loadPollyConfig(polly_config_file, polly_default_config_file)[source]#
ppcpy.io.loadConfigs.checkPollyConfigDict(polly_config_dict: dict) → dict[source]#

Check and potentially modify polly config dict

Parameters:
- polly_config_dict (dict): polly config dict to be checked

Output:
- new_polly_config_dict (dict): checked (and modified) polly config dict
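The check-and-return pattern described above could look like the following sketch; the concrete checks (and the flagDTCor key used here) are invented for illustration and are not the actual ppcpy checks:

```python
def check_polly_config_dict(polly_config_dict):
    # Work on a copy so the caller's dict is not mutated.
    new_polly_config_dict = dict(polly_config_dict)
    # Invented example check: fill a missing key with a default value.
    new_polly_config_dict.setdefault('flagDTCor', False)
    return new_polly_config_dict
```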

ppcpy.io.readMeteo#

class ppcpy.io.readMeteo.Meteo(meteorDataSource, meteo_folder, meteo_file)[source]#

loadMeteor: read meteorological data.

USAGE:
[temp, pres, relh, wins, wind, meteorAttri] = loadMeteor(mTime, asl)

INPUTS:
mTime: array
    query time.
asl: array
    height above sea level. (m)

KEYWORDS:
meteorDataSource: str
    meteorological data type, e.g. 'gdas1' (default), 'standard_atmosphere', 'websonde', 'radiosonde', 'nc_cloudnet'
gdas1Site: str
    the GDAS1 site for the current campaign.
meteo_folder: str
    the main folder of the GDAS1 profiles (or the cloudnet profiles).
radiosondeSitenum: integer
    site number, which can be found in doc/radiosonde-station-list.txt.
radiosondeFolder: str
    the folder of the sounding files.
radiosondeType: integer
    file type of the radiosonde file.
    1: radiosonde file for MOSAiC (default)
    2: radiosonde file for MUA
flagReadLess: logical
    flag to determine whether to access meteorological data at a certain time interval. (default: false)
method: char
    interpolation method. (default: 'nearest')
isUseLatestGDAS: logical
    whether to search for the latest available GDAS profile. (default: false)

OUTPUTS:
temp: matrix (time * height)
    temperature for each range bin. [°C]
pres: matrix (time * height)
    pressure for each range bin. [hPa]
relh: matrix (time * height)
    relative humidity for each range bin. [%]
wins: matrix (time * height)
    wind speed. (m/s)
meteorAttri: struct
    dataSource: cell
        the data source used in the data processing for each cloud-free group.
    URL: cell
        the data file info for each cloud-free group.
    datetime: array
        datetime label for the meteorological data.

HISTORY:
- 2021-05-22: first edition by Zhenping

Authors: zhenping@tropos.de

load(times, heights)[source]#

load the data and resample to 15 minute intervals
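The resampling step can be sketched as a simple bin average onto a regular 15-minute grid (assuming unix timestamps); this is an illustration, not the actual Meteo.load internals:

```python
import numpy as np

def resample_15min(times_unix, values):
    # Bin each sample into a 15-minute interval, then average per bin.
    step = 15 * 60
    start = (min(times_unix) // step) * step
    bins = ((np.asarray(times_unix) - start) // step).astype(int)
    grid = start + np.arange(bins.max() + 1) * step
    means = np.array([np.asarray(values)[bins == b].mean()
                      for b in range(bins.max() + 1)])
    return grid, means
```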

get_mean_profiles(time_slice)[source]#

get the mean meteorological profiles

class ppcpy.io.readMeteo.MeteoNcCloudnet(basepath, filepattern)[source]#

TODO: for now only one filename is supported; define the preferred model.

find_path_for_time(time)[source]#

find the files for a given time

load(time, height_grid)[source]#

load the data

not quite sure on the interface yet:

met.load(data_cube.retrievals_highres['time'][0])
met.load(datetime.datetime.timestamp(datetime.datetime.strptime(data_cube.date, '%Y%m%d')))

Recipe:
  • load

  • select variables?

  • rename?

  • regrid from (time, level) to (time, lidar heights)

clarify the above-ground vs. above-sea-level height issues
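The regridding step of the recipe can be sketched as a per-profile interpolation from the model levels onto the fixed lidar height grid; function and argument names here are invented for illustration, and the above-ground vs. above-sea-level reference must be reconciled before interpolating:

```python
import numpy as np

def regrid_to_lidar_heights(profiles, model_heights, lidar_heights):
    # profiles, model_heights: shape (time, level); lidar_heights: (height,)
    # Interpolate each time step separately, since model level heights
    # can change from profile to profile.
    out = np.empty((profiles.shape[0], lidar_heights.size))
    for t in range(profiles.shape[0]):
        out[t] = np.interp(lidar_heights, model_heights[t], profiles[t])
    return out
```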

ppcpy.io.readPollyRawData#

ppcpy.io.readPollyRawData.readPollyRawData(filename: str) → dict[source]#

read the Polly raw file

Parameters:
filename : str
Returns:
data_dict : dict

ppcpy.io.write2nc#

ppcpy.io.write2nc.get_git_info(path='.')[source]#
ppcpy.io.write2nc.adding_fixed_vars(data_cube, json_nc_mapping_dict)[source]#
ppcpy.io.write2nc.adding_global_attr(data_cube, json_nc_mapping_dict)[source]#
ppcpy.io.write2nc.write_channelwise_2_nc_file(data_cube, root_dir=PosixPath('/mnt/c/Users/radenz/dev/PicassoPy/PicassoPy'), prod_ls=[])[source]#
ppcpy.io.write2nc.write2nc_file(data_cube, root_dir=PosixPath('/mnt/c/Users/radenz/dev/PicassoPy/PicassoPy'), prod_ls=[])[source]#
ppcpy.io.write2nc.write_profile2nc_file(data_cube, root_dir: str = PosixPath('/mnt/c/Users/radenz/dev/PicassoPy/PicassoPy'), prod_ls: list = [], collect_debug: bool = False)[source]#

Saving profile data to NetCDF4 files

Parameters:
data_cube : object

Main PicassoProc object

root_dir : str
prod_ls : list

List of product names

.. TODO::

Missing comment in variable attributes. Not all retrievals / information needed for the profiles are in data_cube.retrievals_highres… write docstring

ppcpy.io.write2nc.adding_mol_profiles(data_cube, json_nc_mapping_dict: dict, cldFreeGrp: int) → dict[source]#

Temporary quick fix for adding molecular profiles as variables to the NetCDF profile outputs

ppcpy.io.sql_interaction#

ppcpy.io.sql_interaction.get_LC_from_sql_db(db_path: str, table_name: str, wavelength: int | str, method: str, telescope: str, timestamp: str) → dict[source]#

Accesses the sqlite db table and returns LC for all cloud-free regions (profiles)

Parameters:
- db_path (str): name of the specific sqlite db file.
- table_name (str): default 'lidar_calibration_constant'
- wavelength (int or str): the wavelength
- method (str): Klett or Raman
- telescope (str): NR or FR
- timestamp (str): the date or timestamp to look for

Output:
- LC (dict): containing all profiles as list
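Using the column names suggested by the parameters (wavelength, method, telescope, and a date column), such a query could look like the sketch below; the actual table layout in ppcpy may differ:

```python
import sqlite3

def get_lc_from_sql_db(db_path, table_name, wavelength, method, telescope, timestamp):
    # Column names (wavelength, method, telescope, date, lc) are assumptions
    # for illustration; check the real schema before using this.
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            f"SELECT lc FROM {table_name} "
            "WHERE wavelength = ? AND method = ? AND telescope = ? AND date = ?",
            (wavelength, method, telescope, timestamp),
        ).fetchall()
    finally:
        con.close()
    return {"LC": [r[0] for r in rows]}
```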

ppcpy.io.sql_interaction.prepare_for_sql_db_writing(data_cube, parameter: str, method: str) → list[tuple][source]#

Collect all necessary variables and save them to a list of tuples for inserting into a SQLite table.

Parameters:
- data_cube (object)
- parameter (str): LC or DC
- method (str): klett or raman

Output:
- rows_to_insert (list of tuples)

ppcpy.io.sql_interaction.setup_empty(db_path: str, table_name: str, column_names: list[str], data_types: list[str])[source]#

Create/Initialise an empty database.

Parameters:
- db_path (str): Path to the SQLite database file.
- table_name (str): Name of the target table.
- column_names (list of str): List of column names to insert values into (e.g. ['col1', 'col2']).
- data_types (list of str): List of SQLite data types for each respective column (e.g. ['text', 'real']).
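A minimal sketch of the documented behaviour, assuming the table should only be created when it does not exist yet:

```python
import sqlite3

def setup_empty(db_path, table_name, column_names, data_types):
    # Build "col1 text, col2 real" from the two parallel lists.
    columns = ", ".join(f"{name} {dtype}"
                        for name, dtype in zip(column_names, data_types))
    con = sqlite3.connect(db_path)
    with con:  # commits on success
        con.execute(f"CREATE TABLE IF NOT EXISTS {table_name} ({columns})")
    con.close()
```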

ppcpy.io.sql_interaction.write_rows_to_sql_db(db_path: str, table_name: str, column_names: list[str], rows_to_insert: list[str])[source]#

Insert multiple rows into a SQLite table.

Parameters:
- db_path (str): Path to the SQLite database file.
- table_name (str): Name of the target table.
- column_names (list of str): List of column names to insert values into (e.g. ['col1', 'col2']).
- rows_to_insert (list of tuples): Data to insert, e.g. [('a', 'b'), ('c', 'd')].
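A minimal sketch of the documented insert, using one placeholder per column and executemany; this illustrates the pattern, not the exact ppcpy code:

```python
import sqlite3

def write_rows_to_sql_db(db_path, table_name, column_names, rows_to_insert):
    # One "?" placeholder per column keeps the values properly escaped.
    placeholders = ", ".join("?" for _ in column_names)
    columns = ", ".join(column_names)
    con = sqlite3.connect(db_path)
    with con:  # single transaction, committed on success
        con.executemany(
            f"INSERT INTO {table_name} ({columns}) VALUES ({placeholders})",
            rows_to_insert,
        )
    con.close()
```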