VDR Recordings Play Back via XBMC
#31
For live TV, VNSI sends frame by frame, which may be smaller than 32k. Writing 32k to a TCP socket does not mean that chunk is sent in one piece: TCP asks the lower layer for the MTU (maximum transmission unit) and has to respect it. TCP also takes care that a packet which gets lost or corrupted is resent.
What network cards do you have in use?
#32
I have been playing with the MTU size. I have increased the MTU size to 8000.
I'm using the on-board motherboard network interface.
On the server, I have Ethernet controller: Qualcomm Atheros AR8121/AR8113/AR8114 Gigabit or Fast Ethernet (rev b0).

This controller allows changing the MTU size, but doesn't have TCP offload support or other nice features.
I am now investigating whether my switches and hubs also support jumbo packets.
I have dumb 10/100/1000 switches which might not support them.

Thanks for the advice.
I will get back to you when I have more information to report.
#33
Well, I have certainly learned a lot from this exercise.

1) I learned that I had one 1000 Mbps switch (Netgear GS105) which doesn't support jumbo packets.
2) I learned that I needed to set the server MTU to 8000 (the maximum allowed by the NIC on the motherboard).
3) I learned that I needed to set the client MTU to 7000 (the maximum allowed by its NIC).

I swapped out that Netgear GS105 for another switch that supports jumbo packets.
I learned how to use ping -M do -s 7000 <endpoint>

I had to play with the MTU on the client. It should support 8000, but I found that unreliable and had to downgrade to 7000.
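The ping-based MTU probing mentioned above can be sketched like this (a minimal example; 192.168.1.254 is the server address from the ifconfig output later in this thread, and the sizes assume the MTU values discussed here):

```shell
# Probe the path MTU with ping. -M do sets the "don't fragment" bit and
# -s sets the ICMP payload size; IPv4 + ICMP headers add 28 bytes, so a
# standard 1500-byte MTU passes at -s 1472 and fails at -s 1473.
ping -c 1 -M do -s 1472 192.168.1.254

# Largest payload that fits a 7000-byte MTU end to end:
ping -c 1 -M do -s 6972 192.168.1.254

# A "Message too long" error means some device on the path (NIC,
# switch, or router) has a smaller MTU than requested.
```

Probing with increasing sizes quickly reveals which hop in the path refuses jumbo frames.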

Now the videos play just fine using LIVE TV -> recordings.

That was fun!!

Thanks for all your help and the wonderful Kodi and VDR software.
It's really awesome stuff.

Regards,

Jim
#34
Good to see that it's working now. Having fixed this yourself, you can enjoy it even more :-)
#35
Unfortunately, we're not quite done.

I thought you would be interested in this.
If I tune the MTU on the client and server properly (MTU=7000 works best), I can get tv->recordings to play as I would expect.

However, with these MTU settings, Live TV now fails to work.

If you have any ideas on this, let me know.

I will get back to this thread with what I learn.
#36
Did you optimize the MTU size in all directions, or did you just try to maximize it? I would have expected a much smaller size, around 1500.
#37
The default MTU size was 1500 on both server and client.
I increased the MTU on the server to 8000, thinking that was all that was needed (big packets from server to client).
Increasing the MTU on just the server wasn't enough.
I then played with increasing the MTU on the client too.
I played with both until I found a setting that finally worked for playing recordings: 6000 on both server and client.
These settings now don't work with live TV.
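For reference, changing the MTU on a Linux box looks like this (a sketch using the interface name em1 and the values from this thread; needs root):

```shell
# Raise the MTU on the interface; 6000 is the value that worked here.
# (Equivalently, with the older tool: ifconfig em1 mtu 6000)
ip link set dev em1 mtu 6000

# Confirm the change; should print: mtu 6000
ip link show em1 | grep -o 'mtu [0-9]*'

# Restore the default when done experimenting:
ip link set dev em1 mtu 1500
```

Note that a setting made this way does not survive a reboot; the persistent value lives in the distribution's network configuration.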

The server that runs VDR also runs my mail server, which interfaces with the internet (both inbound and outbound email).
I found that changing the MTU on the server from 1500 broke the mail server's connection to the internet.
Outbound mail with large attachments was not reaching the outbound mail server on the internet.
That is a different story and has nothing to do with VDR.

My network configuration is very simple.
I have an unmanaged (dumb) gigabit switch which connects the VDR server, all clients, and the broadband internet router.
I had to replace the unmanaged gigabit switch that didn't support jumbo packets with one that did to get to where I am now.

I think what I'm eventually going to need to do is replace my dumb gigabit switches with managed switches so that I can create a VLAN for VDR services.
I'm doing lots of reading now :-)
#38
That's kind of strange, because the MTU size should not matter. The TCP layer has to respect the MTU and split bigger chunks to fit into it. My GB switch was a very cheap no-name brand; I don't think it is superior in any aspect to the Netgear GS105.
If changing the MTU size on the server breaks anything, there must be a software issue in the network stack on the server. It would surprise me if it was Fedora, because I would have gotten more problem reports. What board is in your server? Maybe a NIC driver issue.
#39
I will go back and further investigate networking issues.
Maybe I have some other bad equipment causing problems.

Do you run your Kodi client on a different machine than your VDR server?
Do you have both server and client set to an MTU size of 1500?

My server has this NIC controller on an ASUS P5QL-EM motherboard.
01:00.0 Ethernet controller: Qualcomm Atheros AR8121/AR8113/AR8114 Gigabit or Fast Ethernet (rev b0)

It uses the stock atl1e driver that comes with Fedora.

Below are my ethtool particulars for the server.
I have restored my MTU to the default of 1500.

linux> ifconfig em1
em1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.254 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::e2cb:4eff:fedd:eaf6 prefixlen 64 scopeid 0x20<link>
ether e0:cb:4e:dd:ea:f6 txqueuelen 1000 (Ethernet)
RX packets 12417238 bytes 6496613375 (6.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 13875171 bytes 18832730902 (17.5 GiB)
TX errors 0 dropped 0 overruns 0 carrier 19 collisions 0

linux> ethtool em1
Settings for em1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
MDI-X: Unknown
Cannot get wake-on-lan settings: Operation not permitted
Current message level: 0x00000000 (0)

Link detected: yes
linux> ethtool -k em1
Features for em1:
rx-checksumming: off [fixed]
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp6-segmentation: off [fixed]
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: off [fixed]
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-lockless: on [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
tx-mpls-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off
rx-all: off
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: off [fixed]
#40
Quote: Do you run your Kodi client on a different machine than your VDR server?

I use both configurations. In my living room it runs on the same system, and all my dev and test systems connect to a different VDR server.

Quote: Do you have both server and client set to an MTU size of 1500?

I did not set it explicitly, but a query showed output similar to yours, with an MTU size of 1500.

This is the first case I have heard of where recording playback fails. I was thinking about the MTU size because I remembered a case at a company I worked for many years ago. All of a sudden, some programs started to fail on a particular server. It turned out that IT had changed the NIC. After they changed it back, all was good again. The failing NIC reported an MTU > 1500.

Do you want to test this branch again? https://github.com/FernetMenta/vdr-plugi...r/tree/jim
It limits packet size to 8k.
#41
FYI, I'm going to buy a different NIC card for the server.
I need to confirm your suspicions about this NIC card,
especially given that you have a similar setup and don't have this issue.

I have applied your latest patches from branch jim.

I hope I did this right.

I did a git pull and got the update.

I then rebuilt and installed.

Here is a portion of the syslog:

Jan 31 15:12:34 linux vdr: [385] loading plugin: /usr/local/lib/vdr/libvdr-vnsiserver.so.2.1.6
Jan 31 15:12:34 linux vdr: [385] loading /etc/vdr/setup.conf
Jan 31 15:12:34 linux vdr: [385] ERROR: unknown config parameter: DumpNaluFill = 0
Jan 31 15:12:34 linux vdr: [385] ERROR: unknown config parameter: SupportTeletext = 1
Jan 31 15:12:34 linux vdr: [385] loading /etc/vdr/sources.conf
Jan 31 15:12:34 linux vdr: [385] loading /etc/vdr/diseqc.conf
Jan 31 15:12:34 linux vdr: [385] loading /etc/vdr/scr.conf
Jan 31 15:12:34 linux vdr: [385] loading /etc/vdr/channels.conf
Jan 31 15:12:34 linux vdr: [385] loading /etc/vdr/timers.conf
Jan 31 15:12:34 linux vdr: [385] loading /etc/vdr/commands.conf
Jan 31 15:12:34 linux vdr: [385] loading /etc/vdr/reccmds.conf
Jan 31 15:12:34 linux vdr: [385] loading /etc/vdr/svdrphosts.conf
Jan 31 15:12:34 linux vdr: [385] loading /etc/vdr/remote.conf
Jan 31 15:12:34 linux vdr: [385] loading /etc/vdr/keymacros.conf
Jan 31 15:12:34 linux vdr: [385] DVB API version is 0x050A (VDR was built with 0x050A)
Jan 31 15:12:34 linux vdr: [385] frontend 0/0 provides ATSC,DVB-C with QAM64,QAM256,VSB8 ("LG Electronics LGDT3303 VSB/QAM Frontend")
Jan 31 15:12:34 linux vdr: [385] found 1 DVB device
Jan 31 15:12:34 linux vdr: [385] initializing plugin: vnsiserver (1.2.1): VDR-Network-Streaming-Interface (VNSI) Server

....


Jan 31 15:13:01 linux vdr: [644] VNSI: getBlock: amount: 32768
Jan 31 15:13:01 linux vdr: [644] VNSI: getBlock: bytes_read: 32768
Jan 31 15:13:01 linux vdr: [644] VNSI: getBlock: amount: 32768
Jan 31 15:13:01 linux vdr: [644] VNSI: getBlock: bytes_read: 32768
Jan 31 15:13:01 linux vdr: [644] VNSI: getBlock: amount: 20624
Jan 31 15:13:01 linux vdr: [644] VNSI: getBlock: bytes_read: 20624
Jan 31 15:13:01 linux vdr: [644] VNSI: getBlock: amount: 32768
Jan 31 15:13:01 linux vdr: [644] VNSI: getBlock: bytes_read: 32768
Jan 31 15:13:01 linux vdr: [644] VNSI: getBlock: amount: 32768
Jan 31 15:13:01 linux vdr: [644] VNSI: getBlock: bytes_read: 32768

Should I have expected getBlock: amount 8192?
#42
I did a force push to this branch, hence git pull does not work. (Personally, I never use git pull.)

Use
git fetch followed by git reset --hard
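Spelled out, the recovery after a force push might look like this (a sketch; the remote name origin and the branch name jim, taken from the link above, are assumptions about the local checkout):

```shell
# After a force push, "git pull" tries to merge the diverged histories
# and fails. Instead, fetch the rewritten branch and hard-reset the
# local branch onto it.
# WARNING: this discards any local commits and uncommitted changes.
git fetch origin
git reset --hard origin/jim
```

After the reset, the local branch is byte-for-byte identical to the force-pushed remote branch.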
#43
That worked, thanks.
Same issue, no change.

Let me go get a different NIC card.
I'll get back to you after that.

Thanks!

Jim
#44
What I meant by "that worked" was that the git fetch and reset --hard let me compile properly.
I now get the 8192 messages:

Jan 31 16:17:24 linux vdr: [8583] VNSI: getBlock: amount: 8192
Jan 31 16:17:24 linux vdr: [8583] VNSI: getBlock: bytes_read: 8192

But the videos still don't play.

Getting new NIC card...
#45
RESOLVED!!

This issue is indeed the NIC interface.
I have ordered a new one.

I noticed in Wireshark that the packet sizes were larger than 1500, even though the MTU was set to 1500.
I then learned that Wireshark captures at the interface above the NIC.
Then I learned that the NIC breaks up the packets to honor the MTU.
Then I learned about TCP segmentation offload.

I was able to fix the existing NIC by turning off TCP segmentation offload.
There must be a bug in the hardware.

I resolved it with ethtool.

sudo ethtool --offload em1 tso off

Afterwards I see all 1514-byte packets in Wireshark.

Wow, how satisfying to get to the bottom of this issue!!
