ppcpy.io#

ppcpy.io.loadConfigs#

ppcpy.io.loadConfigs.loadPicassoConfig(picasso_config_file, picasso_default_config_file)[source]#

Load the general Picasso config file.

Parameters:
picasso_config_file : str or path

the specific config file

picasso_default_config_file : str or path

the default (template) file

Returns:
picasso_config_dict
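A specific-over-default config load of this kind is typically a plain dict merge. The sketch below assumes JSON files and a flat `update`; whether `loadPicassoConfig` actually merges this way (or uses JSON at all) is an assumption, and the function name is illustrative:

```python
import json

def load_config_with_defaults(default_path, specific_path):
    # Start from the default (template) file ...
    with open(default_path) as f:
        cfg = json.load(f)
    # ... and let the specific config override matching keys.
    with open(specific_path) as f:
        cfg.update(json.load(f))
    return cfg
```

Keys present only in the template keep their default values; keys present in both take the specific value.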
ppcpy.io.loadConfigs.readPollyNetConfigLinkTable(polly_config_table_file, timestamp, device)[source]#
ppcpy.io.loadConfigs.fix_indexing(config_dict, keys=['first_range_gate_indx', 'bgCorRangeIndx', 'bgCorRangeIndxLow', 'bgCorRangeIndxHigh', 'LCMeanMinIndx', 'LCMeanMaxIndx'])[source]#
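Judging only by its name and the `...Indx` keys it targets, `fix_indexing` presumably shifts MATLAB-style 1-based index entries to Python's 0-based indexing. That interpretation is an assumption, and the sketch below is not ppcpy's implementation:

```python
def fix_indexing_sketch(config_dict, keys):
    # Hypothetical: shift 1-based (MATLAB-style) index entries to 0-based.
    out = dict(config_dict)
    for k in keys:
        if k in out:
            v = out[k]
            # Handle both scalar indices and lists of indices.
            out[k] = [i - 1 for i in v] if isinstance(v, list) else v - 1
    return out
```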
ppcpy.io.loadConfigs.getPollyConfigfromArray(polly_config_array, picasso_config_dict)[source]#

Load the config for the identified time.

The aim is to declutter the run script.

Parameters:
polly_config_array : pandas DataFrame

selected line from the links.xlsx

picasso_config_dict : dict

general picasso config

Returns:
polly_config_dict : dict
ppcpy.io.loadConfigs.loadPollyConfig(polly_config_file, polly_default_config_file)[source]#
ppcpy.io.loadConfigs.checkPollyConfigDict(polly_config_dict: dict) dict[source]#

Check and potentially modify the polly config dict.

Parameters:
polly_config_dict : dict

polly config dict to be checked

Returns:
new_polly_config_dict : dict

checked (and modified) polly config dict

ppcpy.io.readMeteo#

class ppcpy.io.readMeteo.Meteo(meteorDataSource: str, meteo_folder: str, meteo_file: str)[source]#

Initialise Meteo object.

Parameters:
meteorDataSource : str

Meteorological data source name, e.g. “nc_cloudnet”.

meteo_folder : str

Path to meteorological data folder.

meteo_file : str

Meteorological data file name.

Notes

History - 2021-05-22: First edition by Zhenping.

load(times: float, heights: ndarray, asl: float, flagPicassoComparison: bool = False)[source]#

Load the data and resample to 15 minute intervals.

Parameters:
times : float

Date of measurement [Unix time].

heights : np.ndarray

Height above ground [m].

asl : float

Altitude of measurement site (station) [m.a.s.l.].

flagPicassoComparison : bool

If True, use Picasso values and logic.

get_mean_profiles(time_slice: list) list[source]#

Get the mean meteorological profiles.

Parameters:
time_slice : list

Nested list of time slices in np.datetime64, [[start time, end time], ..]

Returns:
mean_profiles : list

Mean meteorological profiles for each time slice.
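The slice-wise averaging can be sketched with plain NumPy; the function name and the inclusive-bounds masking convention below are assumptions, not ppcpy's implementation:

```python
import numpy as np

def mean_profiles_for_slices(times, profiles, time_slices):
    # times: 1-D array of np.datetime64; profiles: 2-D array (time, height).
    out = []
    for start, end in time_slices:
        # Select all profiles whose timestamp falls inside the slice ...
        mask = (times >= start) & (times <= end)
        # ... and average them along the time axis.
        out.append(profiles[mask].mean(axis=0))
    return out
```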

class ppcpy.io.readMeteo.MeteoNcCloudnet(basepath, filepattern)[source]#

TODO: for now only one filename; define a preferred model.

Initialize MeteoNcCloudnet object.

Parameters:
basepath : str

Path to meteorological data folder.

filepattern : str

Meteorological data file name.

Notes

History - xxxx-xx-xx: First edition by …

radiusEarth: float = 6371200.0#
acelerationEarth: float = 9.80665#
find_path_for_time(time: float) str[source]#

Find the files for a given time.

Returns:
str

Path to the file for the given time.

load(time: float, height_grid: ndarray, station_altitude: float, flagPicassoComparison: bool = False) Dataset[source]#

Load the data.

Parameters:
time : float

Date of measurement [Unix time].

height_grid : np.ndarray

Height above ground [m].

station_altitude : float

Altitude of measurement site (station) [m.a.s.l.].

flagPicassoComparison : bool

If True, use Picasso values and logic.

Returns:
ds_new : Dataset

Dataset of meteorology profiles (temperature, pressure, relative humidity, specific humidity).

Notes

Not quite sure on the interface yet:

met.load(data_cube.retrievals_highres['time'][0])
met.load(datetime.datetime.timestamp(datetime.datetime.strptime(data_cube.date, '%Y%m%d')))

Recipe:
  • load

  • select variables?

  • rename?

  • regrid from (time, level) to (time, lidar heights)
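The regrid step in the recipe can be sketched with np.interp, interpolating each time step from the model levels onto the fixed lidar height grid; the function name and the per-time-step loop are illustrative assumptions, not ppcpy's actual code:

```python
import numpy as np

def regrid_to_lidar_heights(model_heights, model_values, lidar_heights):
    # model_heights, model_values: (time, level); lidar_heights: (height,).
    out = np.empty((model_values.shape[0], lidar_heights.size))
    for t in range(model_values.shape[0]):
        # np.interp requires the level heights to be increasing.
        out[t] = np.interp(lidar_heights, model_heights[t], model_values[t])
    return out
```

Note that np.interp clamps query points outside the model level range to the boundary values, which is relevant for grid points below the lowest model level.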

Todo

Clarify the above-ground vs. above-sea-level issues.

Todo

Check how the interpolation handles negative grid points

Todo

Check if we can interpolate several variables at once

ppcpy.io.readPollyRawData#

ppcpy.io.readPollyRawData.readPollyRawData(filename: str) dict[source]#

Read the Polly raw file.

Parameters:
filename : str
Returns:
data_dict : dict

ppcpy.io.write2nc#

ppcpy.io.write2nc.get_git_info(path='.')[source]#
ppcpy.io.write2nc.adding_fixed_vars(data_cube, json_nc_mapping_dict)[source]#
ppcpy.io.write2nc.adding_global_attr(data_cube, json_nc_mapping_dict)[source]#
ppcpy.io.write2nc.write_channelwise_2_nc_file(data_cube, root_dir=PosixPath('/mnt/c/Users/radenz/dev/PicassoPy/PicassoPy'), prod_ls=[])[source]#
ppcpy.io.write2nc.write2nc_file(data_cube, root_dir=PosixPath('/mnt/c/Users/radenz/dev/PicassoPy/PicassoPy'), prod_ls=[])[source]#
ppcpy.io.write2nc.write_profile2nc_file(data_cube, root_dir: str = PosixPath('/mnt/c/Users/radenz/dev/PicassoPy/PicassoPy'), prod_ls: list = [], collect_debug: bool = False)[source]#

Save profile data to NetCDF4 files.

Parameters:
data_cube : object

Main PicassoProc object

root_dir : str
prod_ls : list

List of product names

.. TODO::

Missing comment in variable attributes. Not all retrievals / information needed for the profiles are in data_cube.retrievals_highres… Write docstring.

ppcpy.io.write2nc.adding_mol_profiles(data_cube, json_nc_mapping_dict: dict, cldFreeGrp: int) dict[source]#

Temporary quick fix for adding molecular profiles as variables to the NetCDF profile outputs.

ppcpy.io.sql_interaction#

ppcpy.io.sql_interaction.string_to_ts(s)[source]#

Convert a string of format %Y-%m-%d %H:%M:%S to a timestamp (timezone-aware).
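A stdlib equivalent can be sketched as follows; interpreting the naive string as UTC is an assumption about string_to_ts's actual timezone handling, and the function name is illustrative:

```python
from datetime import datetime, timezone

def string_to_ts_sketch(s):
    # Parse the naive string, attach UTC, and return a Unix timestamp.
    dt = datetime.strptime(s, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return dt.timestamp()
```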

ppcpy.io.sql_interaction.get_from_sql_db(db_path: str, table_name: str, ts_interval: list[str]) dict[source]#

Read the lidar calibration constant or depol calibration from the database.

Parameters:
db_path : str

Name of the specific SQLite db file.

table_name : str

default ‘lidar_calibration_constant’

ts_interval : list of str

the date or timestamp to look for

Returns:
dict

in calibration storage format
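A minimal sketch of such an interval query with the stdlib sqlite3 module; the column name cali_start_time and the BETWEEN semantics are hypothetical, not taken from ppcpy:

```python
import sqlite3

def query_interval(db_path, table_name, ts_interval):
    # Fetch all rows whose (hypothetical) cali_start_time column falls
    # inside the [start, end] interval; return them as plain dicts.
    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row
    rows = con.execute(
        f"SELECT * FROM {table_name} WHERE cali_start_time BETWEEN ? AND ?",
        ts_interval,
    ).fetchall()
    con.close()
    return [dict(r) for r in rows]
```

Since the timestamps are stored as '%Y-%m-%d %H:%M:%S' text, lexicographic BETWEEN comparison coincides with chronological order.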

ppcpy.io.sql_interaction.prepare_for_sql_db_writing(data_cube, parameter: str, method: str) list[tuple][source]#

Collect all necessary variables and save them to a list of tuples for inserting into a SQLite table.

Parameters:
data_cube : object
parameter : str

LC or DC

method : str

klett or raman

Returns:
rows_to_insert : list of tuples
ppcpy.io.sql_interaction.setup_empty(db_path: str, table_name: str, column_names: list[str], data_types: list[str], unique: str = '')[source]#

Create/Initialise an empty database.

Parameters:
db_path : str

Path to the SQLite database file.

table_name : str

Name of the target table.

column_names : list of str

List of column names to create (e.g. [‘col1’, ‘col2’]).

data_types : list of str

List of SQLite data types for each respective column (e.g. [‘text’, ‘real’]).
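The CREATE TABLE statement such a setup presumably issues can be sketched as follows; the exact SQL, including IF NOT EXISTS and the UNIQUE clause placement, is an assumption:

```python
def build_create_table_sql(table_name, column_names, data_types, unique=""):
    # Pair each column name with its SQLite type ...
    cols = ", ".join(f"{n} {t}" for n, t in zip(column_names, data_types))
    # ... and optionally add a UNIQUE constraint on the given column(s).
    if unique:
        cols += f", UNIQUE({unique})"
    return f"CREATE TABLE IF NOT EXISTS {table_name} ({cols})"
```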

ppcpy.io.sql_interaction.write_rows_to_sql_db(db_path: str, table_name: str, column_names: list[str], rows_to_insert: list[str])[source]#

Insert multiple rows into a SQLite table.

Parameters:
db_path : str

Path to the SQLite database file.

table_name : str

Name of the target table.

column_names : list of str

List of column names to insert values into (e.g. [‘col1’, ‘col2’]).

rows_to_insert : list of tuples

Data to insert, e.g. [(‘a’, ‘b’), (‘c’, ‘d’)].

Notes

The IGNORE syntax somehow did not work. With the UNIQUE columns defined and INSERT OR REPLACE, at least the new values are updated, though they are given a new ID.
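The behaviour described in the note can be reproduced with a minimal table; the schema here is illustrative, not ppcpy's actual one:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE cal (id INTEGER PRIMARY KEY AUTOINCREMENT, ts TEXT UNIQUE, value REAL)"
)
con.execute("INSERT INTO cal (ts, value) VALUES ('2024-01-01', 1.0)")
# Conflict on the UNIQUE ts column: INSERT OR REPLACE deletes the old row
# and inserts a fresh one, so the value is updated but the id advances.
con.execute("INSERT OR REPLACE INTO cal (ts, value) VALUES ('2024-01-01', 2.0)")
rows = con.execute("SELECT id, ts, value FROM cal").fetchall()
# rows -> [(2, '2024-01-01', 2.0)]
```

INSERT OR IGNORE, by contrast, silently drops the conflicting new row, which would explain why updated values never arrived with the IGNORE syntax.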