
Sparse files for LoggerNet in Linux


LGS Sep 1, 2021 07:03 PM

Hi,

If I check the on-disk size of LoggerNet's log/sys files under Linux, I get:

du -hs /var/opt/CampbellSci/                 
487M    /var/opt/CampbellSci/

But the apparent size of the files is much higher:


du -hs --apparent-size /var/opt/CampbellSci/
13G     /var/opt/CampbellSci/

This is very annoying, as synchronising 487M is not the same as synchronising 13G. Sparse files are not handled well by the usual synchronisation and copy programs (rsync, unison, scp).
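That said, rsync does at least have a --sparse (-S) option that recreates the holes on the destination, so the copy does not balloon to 13G on disk; rsync still has to read through all of the apparent data, though. A minimal sketch (the host and paths are just placeholders):

rsync -a --sparse /var/opt/CampbellSci/ backup-host:/var/opt/CampbellSci/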

Is there a way to avoid those sparse files?

Regards,


LGS Sep 2, 2021 10:13 AM

As additional information, the sparse files are in:

/var/opt/CampbellSci/LoggerNet/sys/bin/data/x

where x is a number. Only three of those directories contain sparse files:

du -sh --apparent-size /var/opt/CampbellSci/LoggerNet/sys/bin/data/*
3.9G    /var/opt/CampbellSci/LoggerNet/sys/bin/data/10
3.9G    /var/opt/CampbellSci/LoggerNet/sys/bin/data/12
22M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/14
6.4M    /var/opt/CampbellSci/LoggerNet/sys/bin/data/16
11M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/18
7.0M    /var/opt/CampbellSci/LoggerNet/sys/bin/data/19
22M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/2
25M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/21
23M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/23
22M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/25
22M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/27
11M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/29
18M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/3
21M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/35
14M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/37
21M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/39
2       /var/opt/CampbellSci/LoggerNet/sys/bin/data/40
563K    /var/opt/CampbellSci/LoggerNet/sys/bin/data/5
22M     /var/opt/CampbellSci/LoggerNet/sys/bin/data/7
3.9G    /var/opt/CampbellSci/LoggerNet/sys/bin/data/9

These directories seem to correspond to three stations that show a value greater than zero for "Uncoll Holes" in the Status Monitor. This is not the case for the other stations. Is there a way to solve this?
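For reference, GNU find can print a sparseness ratio (allocated size over apparent size), which makes the sparse files easy to spot; the 0.9 threshold below is just an arbitrary cut-off:

find /var/opt/CampbellSci/LoggerNet/sys/bin/data -type f -printf '%S\t%s\t%p\n' | awk '$1 < 0.9'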


jtrauntvein Sep 2, 2021 11:11 PM

The files that you mention above belong to the data cache that LoggerNet keeps for collected data, which allows that data to be distributed to one or more clients without having to be recollected from the datalogger.

If you are using rsync to back up LoggerNet, or even to transfer LoggerNet from one host to another, you would likely be better off using the create-snapshot command in CoraScript (/opt/CampbellSci/LoggerNet/cora_cmd) to generate a single file that can be transferred.  This can also be done using the Setup screen connected from a Windows host.  One of the advantages of this approach is that it gives you the option to exclude the cache table storage files from the backup image if that historic data is not important to you (it is generally not that important to people who rely on the data files written by LoggerNet during collection).  This greatly reduces the amount of data that needs to be transferred while preserving your configuration and settings, and it also ensures that all of the config files being backed up are consistent with one another.  The backup image can be applied to another LoggerNet instance, by the way, using CoraScript's restore-snapshot command.  Information on using these commands can be found in CoraScript's help file (/opt/CampbellSci/LoggerNet/cora_cmd.html).
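As a rough sketch of how that might look from a shell on the Linux host: cora_cmd reads CoraScript commands from standard input, so the snapshot can be scripted. The create-snapshot options (output file name, whether to exclude the cache tables) are deliberately omitted here rather than guessed at; see cora_cmd.html for their exact names.

/opt/CampbellSci/LoggerNet/cora_cmd <<'EOF'
connect localhost;
create-snapshot;
exit;
EOF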

Given the size of the directories that you show above, I also suspect that the cache table files have been sized too large.  LoggerNet has a device setting called max-cache-table-size that specifies the maximum space, in bytes, that the data portion of a cache table should consume.  The default value for this setting is 2 MiB.  Unfortunately, there was a long-standing issue in LoggerNet versions older than 4.7 that caused this setting to be ignored when the number of records allocated for a cache table was being set.  This has been addressed in version 4.7 and newer, so I would recommend updating to that version of the package if you have not already done so.
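If you want to see what a station is currently set to, CoraScript can read device settings. The sketch below is from memory: the get-device-setting command name, the argument style, and the placeholder station name "my_station" should all be verified against cora_cmd.html before relying on it.

/opt/CampbellSci/LoggerNet/cora_cmd <<'EOF'
connect localhost;
get-device-setting my_station max-cache-table-size;
exit;
EOF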

This setting only takes effect when a new datalogger table is created.  Since the set of tables already created is likely much larger than the set not yet created, the fix by itself does not help.  There is, however, another command, available only in CoraScript, called resize-tables-auto that will iterate through all stations and tables in the network map and resize (shrink) any tables that are larger than the max-cache-table-size setting value for that station.  It also has an "--audit" option that shows what changes would be made without actually making them.
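A sketch of the audit-then-apply pattern (whether --audit takes a value is worth checking in cora_cmd.html before running this):

/opt/CampbellSci/LoggerNet/cora_cmd <<'EOF'
connect localhost;
resize-tables-auto --audit;
exit;
EOF

Once the audit output looks right, run the same command again without --audit to actually shrink the tables.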

I hope that this helps.


netskink Sep 24, 2021 05:15 PM

I can't help you, but I am writing because you are using Linux. Are you using Linux as a mounted drive or as an FTP/SCP sink?  I am using PC400 on Windows.  I looked at LoggerNet, and it seems to run only on Windows as well.


jtrauntvein Sep 25, 2021 03:20 PM

Netskink,

There are versions of LoggerNet available that have been compiled for both Debian and Red Hat distributions.  These packages do not contain the entire suite, as many of the client applications have not been ported.  The Windows clients can, however, connect to a remote instance of LoggerNet for Linux.  I also know that the Windows clients have been run under Wine or in a VM.  In case you are interested, you can find more information on the LoggerNet for Linux product at https://www.campbellsci.com/lnlinux


LGS Mar 24, 2022 05:50 AM

Hi,

Just to confirm that a CoraScript script resizing the tables to 10000 records solved the problem (using the "resize-table" command, as "resize-tables-auto" is not available in my version, 4.6-11). But it has to be done regularly, because the files end up becoming sparse and very large again.
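In case it helps someone else, this can be scheduled so the tables never grow huge again. A sketch of a weekly cron script, assuming /root/resize.cora is a file you have written yourself containing the connect and resize-table commands for your stations (take the exact resize-table arguments from cora_cmd.html for your version):

#!/bin/sh
# Replay a prepared CoraScript file so the cache tables are shrunk weekly.
/opt/CampbellSci/LoggerNet/cora_cmd < /root/resize.cora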
