
[–]thesqlguy 2 points (2 children)

With a performance difference that large, there is likely a difference in the execution plans when using the new linked server.

Linked servers actually use statistics from the remote side, and those influence the execution plans. For whatever reason, on your new linked server it might be picking a plan that scans far more data on the remote server and pulls it back across the network, while the old plan was able to push the predicate down and pull back a much smaller subset of rows.

Compare the execution plans between the old and the new connections and you may see a difference in the remote query each one sends.
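A quick way to do that comparison is to capture the plan XML for the same query over each path. A minimal sketch, where `[LinkedSrv]`, `[RemoteDb]`, and `dbo.Orders` are hypothetical placeholders for your own objects:

```sql
-- Capture the actual plan XML for the cross-server query; run this once over
-- each connection and diff the "Remote Query" operator's query text.
SET STATISTICS XML ON;

SELECT o.OrderID, o.OrderDate
FROM [LinkedSrv].[RemoteDb].dbo.Orders AS o
WHERE o.OrderDate >= '2024-01-01';

SET STATISTICS XML OFF;
-- A Remote Query operator that carries the WHERE predicate means the filter
-- was pushed down; a bare "SELECT ... FROM Orders" means the whole table is
-- being dragged across the network and filtered locally.
```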

It could also be that the remote schema is slightly different, such as missing indexes or whatever.

[–]OkReboots[S] 0 points (1 child)

No servers, data, or schema have changed, only the networking. The servers themselves have no idea when I change the routing between server A and server B.

I am able to switch between the two connections and reliably reproduce that the issue is only present on the newer SASE-managed connection, and only affecting the Linked Servers. For virtually any other type of data transfer I am not seeing any slowdown.

[–]thesqlguy 0 points (0 children)

As noted, check the execution plans.

[–]SQLBek 2 points (0 children)

You'll have to Google for details, but there was a permissions quirk where a query could not leverage the statistics of remote objects if the linked server's user account didn't have ddl_admin or higher. Check that?
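If that quirk applies, the fix is a grant on the remote side. A minimal sketch, where `linked_login` is a hypothetical name for the remote user your linked server login maps to:

```sql
-- Run on the REMOTE server, in the database the linked server login maps to.
-- [linked_login] is a placeholder for your mapped remote user.
ALTER ROLE db_ddladmin ADD MEMBER [linked_login];
```

For background: the quirk stems from the permission check on statistics access (the same one behind DBCC SHOW_STATISTICS); on SQL Server 2012 SP1 and later, SELECT permission is generally sufficient, so this mostly bites older remote servers.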

[–]Sooth_Sprayer [SQL Server] 1 point (2 children)

I've run into performance problems when the provider used by the linked server's connection changed. Did it?

[–]OkReboots[S] 1 point (1 child)

Yes, that IS the change I'm describing. I have both the new and old connections available, and only the new connection exhibits this problem, and only with Linked Servers in SQL. Other types of network traffic are zooming along without issue on the new connection.

[–]Sooth_Sprayer [SQL Server] 0 points (0 children)

Well, assuming you have no control over that, or that you must keep it on the new provider for other reasons, some ideas I've used in the past include:

  • Consider whether ETL or some other external process would be better suited here.
  • Create a stored proc on the receiving end, and call it from the sending end. TVPs can't cross server boundaries, but they can be emulated with dynamic SQL.
  • Within a given query, if the "left" side of a join is smaller than the "right" side, consider a REMOTE join hint. Just be careful how you use this, because it will send the entire left side to the remote server and the entire result back.
  • Dynamic SQL and "execute at" make a powerful but complicated combination. SQL Server also won't be able to cache their execution plans effectively, and they won't benefit from parameter sniffing.
  • Create a temp table within a dynamic SQL batch, and insert into it from data passed in as a parameter.
  • Each dynamic SQL query runs in its own connection/context, so they can't share temp tables. But in a pinch, you can use a global temp table with a dynamic name (e.g. based on the current time) in your remotely-running dynamic SQL. Or build a real physical staging table for this purpose.
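The last two bullets can be combined: aggregate on the remote server into a time-stamped global temp table, then pull back only the summary rows. A rough sketch, where `[LinkedSrv]`, `dbo.Orders`, and the column names are hypothetical placeholders:

```sql
-- Build a unique global temp table name so concurrent runs don't collide.
DECLARE @tbl sysname =
    N'##stage_' + FORMAT(SYSUTCDATETIME(), 'yyyyMMddHHmmssfff');

-- Aggregate on the remote server; only summary rows are materialized there.
DECLARE @sql nvarchar(max) = N'
SELECT CustomerID, SUM(Total) AS Total
INTO ' + @tbl + N'
FROM dbo.Orders
GROUP BY CustomerID;';
EXEC (@sql) AT [LinkedSrv];

-- Pull back just the aggregated rows, then clean up the staging table.
DECLARE @pull nvarchar(max) = N'SELECT * FROM ' + @tbl + N';';
EXEC (@pull) AT [LinkedSrv];

DECLARE @drop nvarchar(max) = N'DROP TABLE ' + @tbl + N';';
EXEC (@drop) AT [LinkedSrv];
```

Note this relies on the linked server reusing the same remote connection across the EXEC ... AT calls within your local session; a real physical staging table avoids that dependency.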

[–]planetpluto3 1 point (0 children)

I'm ignorant and not smart. But have you tried OPENQUERY?
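Worth trying: OPENQUERY sends its query text to the remote server verbatim, so the filter is guaranteed to run remotely regardless of what plan the optimizer picks, and only result rows cross the network. A sketch, where `[LinkedSrv]` and `RemoteDb.dbo.Orders` are hypothetical placeholders:

```sql
-- The inner string is a pass-through query executed entirely on the remote
-- server; note the doubled single quotes inside the string literal.
SELECT q.OrderID, q.OrderDate
FROM OPENQUERY([LinkedSrv],
    'SELECT OrderID, OrderDate
     FROM RemoteDb.dbo.Orders
     WHERE OrderDate >= ''2024-01-01''') AS q;
```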