With the continued development of deep learning and computer vision, demand for 3D cameras keeps growing, and the ZED camera is one of the leading products in this space. The ZED captures images from its left and right lenses simultaneously and combines them into a depth-perception image. This makes it very good at visual ranging and depth sensing, and it is widely used in robotics, virtual reality, autonomous driving, and security. However, many users run into kernel compatibility problems when setting up the ZED camera on Linux. This article analyzes those problems and offers solutions.

First, the ZED camera is normally used on a Linux host, so the platform and kernel version matter. Officially supported platforms are Ubuntu 16.04 and 18.04 as well as Windows 10 (64-bit); note that these are operating-system releases rather than specific kernel versions. If you use another Linux distribution, such as CentOS or Debian, you need to make sure your kernel version and any missing dependencies match what the ZED SDK expects, otherwise the camera may be incompatible or fail to be recognized. The commands below give a quick way to check what you are running.
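A minimal check, assuming a Debian/Ubuntu-style system where `lsb_release` is available (on other distributions, `/etc/os-release` gives the same information):

```bash
# Compare your distribution release and running kernel against the ZED SDK's requirements
lsb_release -a   # distribution release, e.g. Ubuntu 16.04 / 18.04
uname -r         # running kernel version
```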
Second, the ZED SDK requires CUDA, and for some features cuDNN, downloaded from NVIDIA's official website. CUDA is NVIDIA's parallel computing platform and programming model, which accelerates computation on the GPU for deep neural networks, signal processing, image processing, and more. cuDNN is NVIDIA's deep neural network (DNN) library, which accelerates the forward and backward passes of DNNs. If CUDA and cuDNN are already installed and configured, installing the ZED SDK is relatively simple. If this is your first installation, follow the official CUDA and cuDNN documentation first to make sure the versions are compatible with the ZED SDK. A few quick sanity checks are sketched below.
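A rough sketch of how to verify the NVIDIA stack; the cuDNN header path is an assumption that varies with the cuDNN version and how it was installed:

```bash
# Sanity-check the NVIDIA stack before installing the ZED SDK
nvidia-smi       # confirms the driver is loaded and shows the GPU and driver version
nvcc --version   # reports the installed CUDA toolkit version
# cuDNN version (header location is an assumption; adjust for your install)
grep -A 2 'define CUDNN_MAJOR' /usr/local/cuda/include/cudnn_version.h 2>/dev/null
```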
Third, building and installing the ZED tools on Linux has some prerequisites, for example Qt and SDL2. Qt is a cross-platform application and UI framework used to build graphical applications. SDL2 is a library for building multimedia applications such as video games. Install these dependencies before building so that everything works correctly, for example as sketched below.
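A minimal sketch for Ubuntu/Debian; the package names qtbase5-dev and libsdl2-dev are the usual ones there, but adjust them for your distribution:

```bash
# Install Qt and SDL2 development packages (package names assume Ubuntu/Debian)
sudo apt-get update
sudo apt-get install -y qtbase5-dev libsdl2-dev
```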
If the ZED camera is still not recognized, you can also try adjusting its device permissions. On Linux, device permissions are an important security measure that controls which programs may access which devices. If the ZED camera is held open by another program, or its permissions are set incorrectly, it may fail to be recognized. In that case, you can run the following command:
```bash
sudo chmod 666 /dev/video0
```
This makes the video device (for example the ZED camera) readable and writable, so other applications can access it. Note that a chmod on a /dev node does not survive a reboot; a udev rule, sketched below, makes the change persistent.
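A hypothetical udev rule for a persistent fix. The vendor ID 2b03 is an assumption for Stereolabs devices; confirm the actual ID with `lsusb` before using it:

```bash
# Make the camera accessible without re-running chmod after every reboot
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="2b03", MODE="0666"' | sudo tee /etc/udev/rules.d/99-zed.rules
sudo udevadm control --reload-rules && sudo udevadm trigger
```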
Summary
The ZED camera is a very capable 3D camera that is widely used in robotics, virtual reality, autonomous driving, and security. When working with it, you may occasionally run into kernel compatibility problems. To resolve them, make sure your kernel version and missing dependencies match what the ZED SDK expects, install and configure CUDA and cuDNN, install dependencies such as Qt and SDL2, and adjust device permissions. These are the key steps to getting the ZED camera working on a Linux system.
Proxmox VE — ZFS on Linux
ZFS is a combined file system and logical volume manager designed by Sun Microsystems. Starting with Proxmox VE 3.4, the native Linux kernel port of ZFS is included as an optional file system and as an additional choice for the root file system. There is no need to compile ZFS modules manually; all packages are included.
By using ZFS, it is possible to get enterprise-grade features on a low hardware budget, and also to build high-performance systems by leveraging SSD caching or even SSD-only setups. ZFS can replace costly hardware RAID cards at the price of moderate CPU and memory load, combined with simple management.
Hardware
ZFS depends heavily on memory, so you need at least 8 GB to start. In practice, use as much as your hardware budget allows. To prevent data corruption, we recommend using high-quality ECC RAM.
If you use a dedicated cache and/or log disk, you should use an enterprise-class SSD (e.g. the Intel SSD DC S3700 series). This can increase the overall performance significantly.
If you are experimenting with an installation of Proxmox VE inside a VM (Nested Virtualization), don’t use virtio for disks of that VM, since they are not supported by ZFS. Use IDE or SCSI instead (works also with virtio SCSI controller type).
When you install using the Proxmox VE installer, you can choose ZFS for the root file system. You need to select the RAID type at installation time:
| RAID level | Description |
|---|---|
| RAID0 | Also called "striping". The capacity of such a volume is the sum of the capacities of all disks. But RAID0 does not add any redundancy, so the failure of a single drive makes the volume unusable. |
| RAID1 | Also called "mirroring". Data is written identically to all disks. This mode requires at least 2 disks with the same size. The resulting capacity is that of a single disk. |
| RAID10 | A combination of RAID0 and RAID1. Requires at least 4 disks. |
| RAIDZ-1 | A variation on RAID-5, single parity. Requires at least 3 disks. |
| RAIDZ-2 | A variation on RAID-5, double parity. Requires at least 4 disks. |
| RAIDZ-3 | A variation on RAID-5, triple parity. Requires at least 5 disks. |
The installer automatically partitions the disks, creates a ZFS pool called rpool, and installs the root file system on the ZFS subvolume rpool/ROOT/pve-1.
Another subvolume called rpool/data is created to store VM images. In order to use that with the Proxmox VE tools, the installer creates the following configuration entry in /etc/pve/storage.cfg:
```
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
```
After installation, you can view your ZFS pool status using the zpool command:
```
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:
errors: No known data errors
```
The zfs command is used to configure and manage your ZFS file systems. The following command lists all file systems after installation:
```
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
```
Depending on whether the system is booted in EFI or legacy BIOS mode, the Proxmox VE installer sets up either grub or systemd-boot as the main bootloader. See the chapter on Proxmox VE host bootloaders for details.
This section gives you some usage examples for common tasks. ZFS itself is really powerful and provides many options. The main commands to manage ZFS are zfs and zpool. Both commands come with great manual pages, which can be read with:
```
# man zpool
```
To create a new pool, at least one disk is needed. The ashift should match the sector size of the underlying disk (2 to the power of ashift equals the sector size) or be larger.

```
zpool create -f -o ashift=12 <pool> <device>
```

To activate compression:

```
zfs set compression=lz4 <pool>
```
Create a new pool with RAID-0 (minimum 1 disk):

```
zpool create -f -o ashift=12 <pool> <device1> <device2>
```

Create a new pool with RAID-1 (minimum 2 disks):

```
zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
```

Create a new pool with RAID-10 (minimum 4 disks):

```
zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
```

Create a new pool with RAIDZ-1 (minimum 3 disks):

```
zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
```

Create a new pool with RAIDZ-2 (minimum 4 disks):

```
zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
```
Create a new pool with cache (L2ARC). It is possible to use a dedicated cache drive partition to increase performance (use an SSD). More devices can be used, as shown in "Create a new pool with RAID*".

```
zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
```
Create a new pool with log (ZIL). It is possible to use a dedicated log drive partition to increase performance (use an SSD). More devices can be used, as shown in "Create a new pool with RAID*".

```
zpool create -f -o ashift=12 <pool> <device> log <log_device>
```
Add cache and log to an existing pool. If you have a pool without cache and log, first partition the SSD into two partitions with parted or gdisk.
| Always use GPT partition tables. |
The maximum size of a log device should be about half the size of physical memory, so it is usually quite small. The rest of the SSD can be used as cache.

```
zpool add -f <pool> log <device-part1> cache <device-part2>
```
Changing a failed device
```
zpool replace -f <pool> <old device> <new device>
```
Changing a failed bootable device when using systemd-boot
```
sgdisk <healthy bootable device> -R <new device>
sgdisk -G <new device>
zpool replace -f <pool> <old zfs partition> <new zfs partition>
pve-efiboot-tool format <new disk's ESP>
pve-efiboot-tool init <new disk's ESP>
```
| ESP stands for EFI System Partition, which is setup as partition #2 on bootable disks setup by the Proxmox VE installer since version 5.4. For details, see Setting up a new partition for use as synced ESP . |
ZFS comes with an event daemon, which monitors events generated by the ZFS kernel module. The daemon can also send emails on ZFS events like pool errors. Newer ZFS packages ship the daemon in a separate package, which you can install with apt-get:
```
# apt-get install zfs-zed
```
To activate the daemon it is necessary to edit /etc/zfs/zed.d/zed.rc with your favourite editor, and uncomment the ZED_EMAIL_ADDR setting:
```
ZED_EMAIL_ADDR="root"
```
Please note Proxmox VE forwards mails to root to the email address configured for the root user.
| The only setting that is required is ZED_EMAIL_ADDR. All other settings are optional. |
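After editing zed.rc, the daemon has to pick up the change. A small sketch, assuming a systemd-based host and the service name shipped by current zfs-zed packages:

```bash
# Restart the event daemon so the new ZED_EMAIL_ADDR value takes effect
systemctl restart zfs-zed.service
```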
It is good to use at most 50 percent (which is the default) of the system memory for ZFS ARC to prevent performance shortage of the host. Use your preferred editor to change the configuration in /etc/modprobe.d/zfs.conf and insert:
```
options zfs zfs_arc_max=8589934592
```
This example setting limits the usage to 8GB.
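For reference, zfs_arc_max is specified in bytes, so the value can be derived in the shell:

```bash
# 8 GiB expressed in bytes, the unit zfs_arc_max expects
echo $((8 * 1024 * 1024 * 1024))   # prints 8589934592
```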
| If your root file system is ZFS, you must update your initramfs every time this value changes: update-initramfs -u |
Swap space created on a zvol may cause problems, like blocking the server or generating a high IO load, often seen when starting a backup to an external storage.
We strongly recommend using enough memory, so that you normally do not run into low-memory situations. Should you need or want to add swap, it is preferred to create a partition on a physical disk and use it as a swap device. You can leave some space free for this purpose in the advanced options of the installer. Additionally, you can lower the "swappiness" value. A good value for servers is 10:
```
sysctl -w vm.swappiness=10
```
To make the swappiness persistent, open /etc/sysctl.conf with an editor of your choice and add the following line:
```
vm.swappiness = 10
```
Table 1. Linux kernel swappiness parameter values

| Value | Behavior |
|---|---|
| vm.swappiness = 0 | The kernel will swap only to avoid an out of memory condition. |
| vm.swappiness = 1 | Minimum amount of swapping without disabling it entirely. |
| vm.swappiness = 10 | This value is sometimes recommended to improve performance when sufficient memory exists in a system. |
| vm.swappiness = 60 | The default value. |
| vm.swappiness = 100 | The kernel will swap aggressively. |
ZFS on Linux version 0.8.0 introduced support for native encryption of datasets. After an upgrade from previous ZFS on Linux versions, the encryption feature can be enabled per pool:
```
# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  disabled  local
```
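The output above shows the feature still disabled. It can be switched on with a standard zpool command (not part of the original excerpt); the pool name tank follows the example:

```bash
# Enable the encryption feature flag on the example pool
zpool set feature@encryption=enabled tank
```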
Once the feature has been enabled, the same query reports:

```
NAME  PROPERTY            VALUE    SOURCE
tank  feature@encryption  enabled  local
```
| There is currently no support for booting from pools with encrypted datasets using Grub, and only limited support for automatically unlocking encrypted datasets on boot. Older versions of ZFS without encryption support will not be able to decrypt stored data. |
| It is recommended to either unlock storage datasets manually after booting, or to write a custom unit to pass the key material needed for unlocking on boot to zfs load-key. |
| Establish and test a backup procedure before enabling encryption of production data. If the associated key material/passphrase/keyfile has been lost, accessing the encrypted data is no longer possible. |
Encryption needs to be set up when creating datasets/zvols, and is inherited by default by child datasets. For example, to create an encrypted dataset tank/encrypted_data and configure it as a storage in Proxmox VE, run the following commands:
```
# zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:
```
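To register the dataset as a storage in Proxmox VE, something along these lines should work; the storage ID encrypted_zfs is an arbitrary example:

```bash
# Add the encrypted dataset as a zfspool storage (storage ID chosen freely)
pvesm add zfspool encrypted_zfs -pool tank/encrypted_data
```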
All guest volumes/disks created on this storage will be encrypted with the shared key material of the parent dataset.
To actually use the storage, the associated key material needs to be loaded with zfs load-key:
```
# zfs load-key tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':
```
It is also possible to use a (random) keyfile instead of prompting for a passphrase by setting the keylocation and keyformat properties, either at creation time or with zfs change-key on existing datasets:
```
# dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1
```
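The keyfile can then be attached to an existing dataset roughly as follows; the path is the placeholder from the dd command above:

```bash
# Switch the dataset to the raw 32-byte keyfile created above
zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
```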
| When using a keyfile, special care needs to be taken to secure the keyfile against unauthorized access or accidental loss. Without the keyfile, it is not possible to access the plaintext data! |
A guest volume created underneath an encrypted dataset will have its encryptionroot property set accordingly. The key material only needs to be loaded once per encryptionroot to be available to all encrypted datasets underneath it.
See the encryptionroot, encryption, keylocation, keyformat and keystatus properties, the zfs load-key, zfs unload-key and zfs change-key commands and the Encryption section from man zfs for more details and advanced usage.
Running an operating system on Zynq: compiling the Linux kernel
(Note: in this section, "ZED" refers to the Avnet ZedBoard Zynq development board, not the stereo camera.) After the kernel is built (a typical build sequence is sketched below), copy arch/arm/boot/uImage to the SD card.
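A rough cross-compilation sketch; the toolchain prefix and defconfig name are assumptions that depend on the kernel tree in use:

```bash
# Build a uImage for Zynq (e.g. the ZedBoard); building uImage requires mkimage from u-boot-tools
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- xilinx_zynq_defconfig
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- UIMAGE_LOADADDR=0x8000 uImage   # some trees use LOADADDR= instead
```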
Related topics:
- Linux with HDMI video output on the ZED, ZC702 and ZC706 boards
- ADV7511 HDMI transmitter Linux driver
- Building the Zynq Linux kernel and devicetrees from source
- AXI IIC
That concludes this introduction to Linux kernels suitable for the ZED camera. We hope you found the information you needed; if you want to learn more about this topic, keep following this site.