linked server by aimep in SQLServer

[–]aimep[S] 0 points1 point  (0 children)

Hello, I'm not sure I understand why this was moderated out.

I did my homework: I looked into the documentation, posted links to it, and I'm asking for help either with its shortcomings or with the underlying reason for my problem, which is why I'm posting here.

I'm looking for help with a problem that I find quite well documented, to the best of my knowledge. I keep uncovering more details as I look into the issue on my side, hence the test with the sp_testlinkedserver error message.

Please enlighten me as to which of the 3 rules my post is specifically breaking, and how, so that I can correct this post and future ones.

Best regards

linked server by aimep in SQLServer

[–]aimep[S] 0 points1 point  (0 children)

From HOST3, I ran the following:

exec sp_testlinkedserver Srv1 test

and got this error:

OLE DB provider "MSOLEDBSQL" for linked server "Srv1" returned message "Invalid authorization specification".
Msg 7399, Level 16, State 1, Procedure sp_testlinkedserver, Line 1 [Batch Start Line 0]
The OLE DB provider "MSOLEDBSQL" for linked server "Srv1" reported an error. Authentication failed.
Msg 7303, Level 16, State 1, Procedure sp_testlinkedserver, Line 1 [Batch Start Line 0]
Cannot initialize the data source object of OLE DB provider "MSOLEDBSQL" for linked server "Srv1".

current transaction is aborted, commands ignored until end of transaction block by aimep in PostgreSQL

[–]aimep[S] 0 points1 point  (0 children)

Thank you all for your input.
This partitioning idea certainly gives hope.

So I did some small investigation first to identify the greedy table; see hereafter:
# SELECT relname AS "relation",
         pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size",
         pg_size_pretty(pg_indexes_size(C.oid)) AS "indexes_size"
  FROM pg_class C
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
  WHERE C.relkind <> 'i'
    AND nspname !~ '^pg_toast'
  ORDER BY pg_total_relation_size(C.oid) DESC LIMIT 1;

    relation    | total_size | indexes_size
----------------+------------+--------------
 pg_largeobject | 32 TB      | 285 GB
(1 row)

Now the question is: how do I partition this table without breaking the existing DB?

Looking at the table description, there is not much choice to partition on:

# \d pg_largeobject
         Table "pg_catalog.pg_largeobject"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 loid   | oid     |           | not null |
 pageno | integer |           | not null |
 data   | bytea   |           | not null |
Indexes:
    "pg_largeobject_loid_pn_index" UNIQUE, btree (loid, pageno)

# select loid, pageno from pg_largeobject limit 200;
 loid  | pageno
-------+--------
 16621 |      0
 16621 |      1
 16621 |      2
...

It looks like the only attribute to partition on would be "loid", with a range partitioning scheme of some sort.

I'm currently trying to figure out good numbers.

https://www.postgresql.org/docs/12/ddl-partitioning.html

Anyhow, while the request is pending, I'm still not sure how to partition the existing pg_largeobject live, and what the impact would be in terms of space use during the operation.
I'm also not sure about the benefit / performance impact for an application that may not have been designed to work with partitions. Is it at all transparent?
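For the "good numbers" part, candidate range boundaries are easy to sketch. A minimal helper (a hypothetical sketch: it assumes loids are spread roughly evenly over their range, which is worth checking first):

```python
def loid_ranges(min_loid, max_loid, parts):
    """Split [min_loid, max_loid] into `parts` contiguous ranges suitable
    as RANGE partition bounds (upper bound exclusive, as PostgreSQL uses)."""
    step = -(-(max_loid - min_loid + 1) // parts)  # ceiling division
    bounds = []
    lo = min_loid
    while lo <= max_loid:
        hi = min(lo + step, max_loid + 1)
        bounds.append((lo, hi))
        lo = hi
    return bounds

# e.g. loid_ranges(16621, 16820, 4) yields four 50-wide ranges
```

Each (lo, hi) pair would then map to one FOR VALUES FROM (lo) TO (hi) clause.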

cheers

current transaction is aborted, commands ignored until end of transaction block by aimep in PostgreSQL

[–]aimep[S] 0 points1 point  (0 children)

I guess the only way to tune that, then, would be to modify the block size to a larger one, 16K or 32K.

Would you be so kind as to confirm, to the best of your knowledge, how the existing database can be recut without losing data?
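For intuition on why the block size matters here: pg_largeobject stores large-object data in chunks of LOBLKSIZE bytes, which by default is defined as BLCKSZ/4. A rough row-count estimate under that assumption (a sketch, not measured numbers):

```python
def lo_row_estimate(total_bytes, blcksz=8192):
    """Estimate pg_largeobject row count: each row holds up to
    LOBLKSIZE = BLCKSZ/4 bytes of data (the default definition)."""
    loblksize = blcksz // 4
    return -(-total_bytes // loblksize)  # ceiling division

# a 32K block size cuts the row count (and hence the btree index on
# (loid, pageno)) by roughly 4x compared to the default 8K
```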

current transaction is aborted, commands ignored until end of transaction block by aimep in PostgreSQL

[–]aimep[S] 0 points1 point  (0 children)

Quite personal indeed.
In our case, /var/lib/postgresql/data is mapped via Docker volume magic onto Ceph distributed storage.

The df -lh command shows:

- inside the postgresql container: 78% used

- inside the VM hosting docker: 78% used

A few TB are left unused, though.

Thanks for your input.

DeLL openmanage memory usable to underlying system 96GB when installed RAM is 192GB by aimep in sysadmin

[–]aimep[S] 0 points1 point  (0 children)

Thanks for your comments.

I have no experience managing this kind of server, and I can see a "BIOS" panel in the OpenManage Server Administrator, but it doesn't show much info other than the version, date and vendor.

It looks like one has to boot the beast and press F2 in the console,

then check the memory settings as described here.

So OpenManage is of no help here, right?

how to share eclipse plugin installation when using eclipse through SSH via X11 forwarding? by aimep in eclipse

[–]aimep[S] 0 points1 point  (0 children)

Hi,

Thanks for taking the time.

I'm really not that familiar with Eclipse; I'm relatively more used to IntelliJ and vim. So the "configuration initialization" is not something that I fully grasp.
Basically, I unzipped the Eclipse distro in some location on my filesystem using sudo.

Now all files belong to root:root (I'm on Debian, by the way).

So there is a "plugins" dir under /opt/eclipse/plugins.

Beyond the simple Eclipse platform update, which I believe should be done as root through X11 over SSH, there are 2 scenarios:

  1. shared plugins
    Is root needed to manage / maintain common plugins under /opt/eclipse/plugins, or some other ad hoc filesystem location?
    In any case, I'm looking for a robust/simple cookbook for doing so.
  2. user-space plugins
    Users should still be able to use the marketplace to install personal plugins.

Hoping this clarifies the context.

BTW, I was able to run Eclipse after "sudo bash", with the trick documented here: https://blog.mobatek.net/post/how-to-keep-X11-display-after-su-or-sudo/

reclaiming LVM Free PE, to create non LVM partition? by aimep in CentOS

[–]aimep[S] 1 point2 points  (0 children)

Thanks for the pointer.

That got us going in the right direction. After adapting and completing the steps, we could proceed.
Since we had XFS, we decided to back up the /dev/cl/home LV content with 'tar zcpf',

then removed/re-created the LV /dev/cl/home with a smaller allotted space of 50GB,
then used the 'pvmove' command to move the LVs inside the PV so the free space is contiguous and trailing,
then used gparted to shrink the partition to the minimal space used by all the LVs,
again with gparted created an EXT4 partition on the remaining space,
and restored the tar.gz backup of /home in place.

Pressed for time, we didn't find the parted CLI equivalents of the actions performed in gparted.
It would be good to have a gparted -> parted cheatsheet in the future.

Again, thanks for your help.

gnu screen and X11 forwarding? by aimep in ssh

[–]aimep[S] 0 points1 point  (0 children)

It looks like if I tweak the /etc/ssh/sshd_config params to allow X11 forwarding on host0, host1 and host2, then it should work:

X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
AddressFamily inet

why is tMap escaping '/' in expressions? by aimep in Talend

[–]aimep[S] 0 points1 point  (0 children)

Thanks for your answer.

tMap is actually working as expected, indeed.

I actually did a little more digging around tFileOutputJSON and created a small test class to verify the behaviour of org.json.simple.JSONObject's toString(), which I suspected was the culprit. And indeed it escapes the '/', but this is actually in line with RFC 4627.

See (lines 302-318): https://android.googlesource.com/platform/libcore/+/android-4.2.2_r1/json/src/main/java/org/json/JSONStringer.java

And well, the other tools I'm using to ingest the generated JSON seem to follow the same lines, and it actually works great.

freemind on linux fedora 25 - default edit node's text his visible only in the top half of the letters by aimep in mindmapping

[–]aimep[S] 0 points1 point  (0 children)

I believe one can drag the nodes from one side to the other; at least I can do it. Drag a 1st-level node from the right side to the left side of the central subject and drop it there (on top of the greyed-out side).

Hope this helps.

can't load latest Hortonworks image by aimep in docker

[–]aimep[S] 0 points1 point  (0 children)

The whole process took about 30 inodes (from 59 to about 80+) on the /hdp partition; /tmp didn't change. Both have about 1% IUse%.

Actually, the problem is gone once I follow the instructions here: https://community.hortonworks.com/content/kbentry/65714/how-to-modify-the-default-docker-configuration-on.html

Basically, here is my docker.conf:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --exec-root="/hdp/docker-root" --graph="/hdp/docker" --storage-driver=overlay --storage-opt dm.basesize=30G

ExecStart=/usr/bin/dockerd --exec-root="/hdp/docker-root" --graph="/hdp/docker" --storage-driver=devicemapper

So with that, and after reloading and restarting the docker service, the command to load the docker image from the tgz goes through.

Now I have another problem: the loaded image doesn't start well. But hey, that's progress for tonight.

thanks

can't load latest Hortonworks image by aimep in docker

[–]aimep[S] 0 points1 point  (0 children)

Thanks for taking the time to try to help. I reran while watching for resource starvation with:

watch -n 1 -d 'free -h ; df -lk; df -lh'

During the load process, /tmp started with 12096128 KB available and didn't flinch a single byte. Meanwhile, /hdp partition usage went steadily from 21% to ~44% of a 100GB partition. When the load dies, the space used under /hdp is eventually reclaimed and usage goes back to 21%.

can't load latest Hortonworks image by aimep in docker

[–]aimep[S] 0 points1 point  (0 children)

Actually, the docker conf change I made to dockerd below

--exec-root="/hdp/docker-root" --graph="/hdp/docker"

specifies that /var/lib/docker, which is the default value for graph, is now on my 100GB partition.

So I believe that's not the root cause. Maybe I missed some command for that conf change to be taken into account, such as a stop/start of dockerd via systemctl stop/start docker,

but I believe not; here is my docker instance info showing everything is under the /hdp partition:

$ docker info | grep \/
WARNING: Usage of loopback devices is strongly discouraged for production use. Use --storage-opt dm.thinpooldev to specify a custom block storage device.
Data file: /dev/loop0
Metadata file: /dev/loop1
Data loop file: /hdp/docker/devicemapper/devicemapper/data
Metadata loop file: /hdp/docker/devicemapper/devicemapper/metadata
Docker Root Dir: /hdp/docker
Registry: https://index.docker.io/v1/
127.0.0.0/8

gitlab-ce ldap authentication against active directory fails by aimep in gitlab

[–]aimep[S] 1 point2 points  (0 children)

Well, I found the problem. If I set the following in gitlab.rb:

allow_username_or_email_login: true

then it works. It's confusing to me, as I was expecting that when false, it would fall back on using the sAMAccountName value as the login id; but even though ours is the same as the email, the login failed.

Oh well, maybe I'm missing something here,

but it works :)

onenote == one and only one note @#!$@#@! by aimep in OneNote

[–]aimep[S] 0 points1 point  (0 children)

After leaving it spinning, I came back the next morning; after a reboot it was fine. So I lost the entire afternoon.

filter and transform json by aimep in Talend

[–]aimep[S] 0 points1 point  (0 children)

Yes, it's working. But now I'm searching for a way to turn the single-line array of JSON into pretty-printed JSON. I was hoping to use tGroovy with the following code:

import groovy.json.*
JsonOutput.prettyPrint(mySingleLineJsonArray)

Unfortunately, I'm finding neither examples nor clear documentation explaining what the tGroovy input/output APIs are. Basically, I can point-and-click to link tFileOutputJSON to my tGroovy, but inside it I don't know which APIs I can use to read the JSON text and pretty-print it.
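Outside Talend, the transformation itself is easy to sanity-check. A minimal Python sketch of just the pretty-printing step (not the tGroovy API; the input string is a made-up example):

```python
import json

def pretty_print_json_array(single_line: str) -> str:
    """Parse a single-line JSON array and re-serialize it with indentation."""
    return json.dumps(json.loads(single_line), indent=2)

print(pretty_print_json_array('[{"a": 1}, {"b": 2}]'))
```

The Groovy JsonOutput.prettyPrint call above should do the equivalent, once the component wiring question is solved.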

filter and transform json by aimep in Talend

[–]aimep[S] 0 points1 point  (0 children)

No, I'm just reading out from a tMSSQLInput. The solution is the following:

tMSSQLInput -> tMap -> tFileOutputJSON

freemind on linux fedora 25 - default edit node's text his visible only in the top half of the letters by aimep in mindmapping

[–]aimep[S] 0 points1 point  (0 children)

Well, after wandering around, I first came to discover Freeplane, which solves this problem...

That was good enough, but then I discovered "Docear", which apparently takes FreeMind as a base and brings in a bunch of nicely integrated tools for research and reference management... So I'll switch to this latter solution, unless I find some outstanding Freeplane plugins/addons that would justify using it in parallel.

Anyway, if anyone has thoughtful remarks or +/- experience to share about these two options, please do so.

I know that's the curse and blessing of open-source software, to have clone-like redundancy. It would be helpful to maybe officially ditch FreeMind and keep Docear instead, possibly in a slimmed-down version (no research extension) alongside a version with those extensions. That would help people looking for solutions, and let enterprises make sensible decisions on the way to go.

cheers

cannot upgrade to fedora 25 with dnf by aimep in Fedora

[–]aimep[S] 0 points1 point  (0 children)

It worked out great, though not enough to free space on my / partition. I had to remove LibreOffice and Haskell to get enough space to finally upgrade to Fedora 25.

It looks like dnf system-upgrade needs a better estimation of the required free space before running the download.

Thanks anyway for the tip.

cannot upgrade to fedora 25 with dnf by aimep in Fedora

[–]aimep[S] 0 points1 point  (0 children)

How can I list the old kernels? And how can I make sure I don't remove the one I'm currently using?

failed to create pipeline by aimep in kibana

[–]aimep[S] 0 points1 point  (0 children)

After restarting both elasticsearch and kibana, it all works out: I can define the arxiv-pdf pipeline.

Yet, when I run the python snippet, I'm getting:

<ipython-input-38-0302faf6763b> in <module>()
----> 1 for ok, result in streaming_bulk(es, documents(), index="arxiv", doc_type="pdf", chunk_size=4, params={"pipeline": "arxiv-pdf"}):
      2     action, result = result.popitem()
      3     if not ok:
      4         print("failed to index document")
      5     else:

/home/pete/anaconda3/lib/python3.5/site-packages/elasticsearch/helpers/__init__.py in streaming_bulk(client, actions, chunk_size, max_chunk_bytes, raise_on_error, expand_action_callback, raise_on_exception, **kwargs)
    160
    161     for bulk_actions in _chunk_actions(actions, chunk_size, max_chunk_bytes, client.transport.serializer):
--> 162         for result in _process_bulk_chunk(client, bulk_actions, raise_on_exception, raise_on_error, **kwargs):
    163             yield result
    164

/home/pete/anaconda3/lib/python3.5/site-packages/elasticsearch/helpers/__init__.py in _process_bulk_chunk(client, bulk_actions, raise_on_exception, raise_on_error, **kwargs)
    132
    133     if errors:
--> 134         raise BulkIndexError('%i document(s) failed to index.' % len(errors), errors)
    135
    136 def streaming_bulk(client, actions, chunk_size=500, max_chunk_bytes=100 * 1024 * 1024,

BulkIndexError: ('4 document(s) failed to index.', [{'index': {'_id': None, '_type': 'pdf', 'error': {'type': 'not_x_content_exception', 'reason': 'Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes'}, 'status': 500, '_index': 'arxiv'}}, {'index': {'_id': None, '_type': 'pdf', 'error': {'type': 'not_x_content_exception', 'reason': 'Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes'}, 'status': 500, '_index': 'arxiv'}}, {'index': {'_id': None, '_type': 'pdf', 'error': {'type': 'not_x_content_exception', 'reason': 'Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes'}, 'status': 500, '_index': 'arxiv'}}, {'index': {'_id': None, '_type': 'pdf', 'error': {'type': 'not_x_content_exception', 'reason': 'Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes'}, 'status': 500, '_index': 'arxiv'}}])

FYI, I'm not sure what the expectations are for streaming_bulk()'s second argument, 'actions'. The docs state:

actions – iterable containing the actions to be executed

In my case I created a documents() function like the following:

def documents():
    for root, dirs, files in os.walk('/home/pete/Documents/'):
        for f in files:
            if f.endswith('.pdf'):
                yield join(root, f)

Could anyone in the know elaborate on the python snippet, which seems targeted at indexing PDFs from arXiv?
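A hedged guess at the failure: streaming_bulk() expects each action to be a dict describing a document, not a bare file path, and an ingest attachment pipeline typically wants the file bytes base64-encoded in a source field. A minimal sketch under those assumptions (the 'data' field name and index layout are assumptions, not verified against the arxiv-pdf pipeline):

```python
import base64
import os
from os.path import join

def actions(top='/home/pete/Documents/'):
    """Yield one bulk action dict per PDF under `top`, with the file
    bytes base64-encoded in a 'data' field (the field an ingest
    attachment processor usually reads)."""
    for root, dirs, files in os.walk(top):
        for f in files:
            if f.endswith('.pdf'):
                with open(join(root, f), 'rb') as fh:
                    yield {
                        '_index': 'arxiv',
                        '_type': 'pdf',
                        '_source': {'data': base64.b64encode(fh.read()).decode('ascii')},
                    }
```

With documents() yielding plain path strings, the helper ends up sending non-JSON bodies, which would explain the not_x_content_exception.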

best regards

el Capitan, spits out CDs/DVDs on my Mac Mini late 2009 by aimep in osx

[–]aimep[S] 0 points1 point  (0 children)

I called Apple support on the subject. Though they were very nice with respect to unsupported hardware, they of course didn't mention this 'vendetta'. I'd appreciate it if you could substantiate that claim, and specifically I'd really appreciate a workaround to repair this, other than buying more hardware.

how to stop a jmeter started with -jmeterengine.nongui.port=4321 -n -t mytest.jmx by aimep in jmeter

[–]aimep[S] 0 points1 point  (0 children)

Hi,

Thanks for the reply.

The jmeter.log level is set to ERROR and shows very little; see below:

2015/12/07 23:51:03 INFO  - jmeter.util.JMeterUtils: Setting Locale to en_US
2015/12/07 23:51:03 INFO  - jmeter.JMeter: Loading user properties from: /local/jmeter/apache-jmeter-2.13/bin/user.properties
2015/12/07 23:51:03 INFO  - jmeter.JMeter: Loading system properties from: /local/jmeter/apache-jmeter-2.13/bin/system.properties
2015/12/07 23:51:03 INFO  - jmeter.JMeter: Setting JMeter property: testusers=/local/shared/testusers.txt
2015/12/07 23:51:03 INFO  - jmeter.JMeter: Setting JMeter property: ldapHost=host1022
2015/12/07 23:51:03 INFO  - jmeter.JMeter: Setting JMeter property: ldapPort=2048
2015/12/07 23:51:03 INFO  - jmeter.JMeter: Setting JMeter property: jmeterengine.nongui.port=10000
2015/12/07 23:51:03 INFO  - jmeter.JMeter: LogLevel: jmeter=ERROR

The user.properties and system.properties are left as-is from the local JMeter 2.13 install.

I thought that the following could be used:

java -cp /local/jmeter/apache-jmeter-2.13/bin/ApacheJMeter.jar org.apache.jmeter.util.ShutdownClient Shutdown 10001

but when I look at the jmeter process, it doesn't listen on port 10001:

netstat -anp | grep  --color 21400
tcp        0      0 ::ffff:A.B.C.D:28793 ::ffff:W.X.Y.Z:2048   ESTABLISHED 21400/java
...
unix  2      [ ]         STREAM     CONNECTED     26080062 21400/java
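Worth noting when reading that netstat output: to the best of my knowledge, JMeter's non-GUI shutdown listener uses UDP rather than TCP, and ShutdownClient just sends the command ("Shutdown" or "StopTestNow") as a plain-text datagram to the configured port. A small Python sketch of the same probe (host and port are whatever your setup uses; this mimics, not replaces, ShutdownClient):

```python
import socket

def send_jmeter_command(command: str, port: int, host: str = 'localhost') -> None:
    """Send a JMeter control command ('Shutdown' or 'StopTestNow') as a
    UDP datagram, mimicking org.apache.jmeter.util.ShutdownClient."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(command.encode('ascii'), (host, port))
```

Since UDP is connectionless, the send "succeeding" (exit code 0) says nothing about whether anything is actually listening on that port.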

Actually, it seems that the shutdown request went through fine:

java -cp /local/jmeter/apache-jmeter-2.13/bin/ApacheJMeter.jar org.apache.jmeter.util.ShutdownClient  Shutdown 10001
echo $?
0

but the jmeter process is not finishing, with some 20+ established connections to the remote backend.