If you need to use sp_executesql with OUTPUT variables:
EXEC sp_executesql @sql
,N'@p0 INT'
,N'@p1 INT OUTPUT'
,N'@p2 VARCHAR(12) OUTPUT'
,@p0
,@p1 OUTPUT
,@p2 OUTPUT;
The biggest clue is the rows are all being returned on one line. This indicates line terminators are being ignored or are not present.
You can specify the line terminator for the CSV reader. If you are on a Mac, the lines created will end with \r rather than the Linux standard \n, or better still the belt-and-suspenders approach of Windows, \r\n.
pandas.read_csv(filename, sep='\t', lineterminator='\r')
You could also open all your data using the codecs package. This may increase robustness at the expense of document loading speed.
import codecs
doc = codecs.open('document','rU','UTF-16') # open for reading with universal newline mode and UTF-16 encoding
df = pandas.read_csv(doc, sep='\t')
Both the answers provided above work fine.
If the command should work with both tabs and spaces as the delimiter I would use awk
:
awk '{print $100,$101,$102,$103,$104,$105}' myfile > outfile
As long as you only need to specify a handful of fields it is, imo, OK to just type them; for longer ranges you can use a for loop:
awk '{for(i=100;i<=105;i++)print $i}' myfile > outfile
If you want to use cut
, you need to use the -f
option:
cut -f100-105 myfile > outfile
If the field delimiter is different from TAB
you need to specify it using -d
:
cut -d' ' -f100-105 myfile > outfile
Check the man page for more info on the cut command.
To read characters try
scan("/PathTo/file.csv", "")
If you're reading numeric values, then just use
scan("/PathTo/file.csv")
scan
by default will use white space as separator. The type of the second arg defines 'what' to read (defaults to double()).
I handled it like this:
svg.selectAll("rect")
.data(zones)
.enter()
.append("rect")
.attr("id", function (d) { return "zone" + d.zone; })
.attr("class", "zone")
.attr("x", function (d, i) {
if (parseInt(i / (wcount)) % 2 == 0) {
this.xcor = (i % wcount) * zoneW;
}
else {
this.xcor = (zoneW * (wcount - 1)) - ((i % wcount) * zoneW);
}
return this.xcor;
})
and afterwards you can find the x coordinate:
svg.select("#zone1").on("click", function(){ alert(this.xcor); });
I bumped into this problem lately with Windows 10 from another direction, and found the answer from @JonSkeet very helpful in solving my problem.
I also did some further research with a test form and found that when the current culture was set to "no"
or "nb-NO"
at runtime (Thread.CurrentThread.CurrentCulture = new CultureInfo("no");
), the ToString("yyyy-MM-dd HH:mm:ss") call responded differently in Windows 7 and Windows 10. It returned what I expected in Windows 7 and HH.mm.ss in Windows 10!
I think this is a bit scary! Since I believed that a culture was a culture in any Windows version at least.
This works in SQL Server 2016:
USE AdventureWorks
GO
DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr+',' ,'') + Name
FROM Production.Product
SELECT @listStr
GO
The EOF
token must be at the beginning of the line, you can't indent it along with the block of code it goes with.
If you write <<-EOF
you may indent it, but it must be indented with Tab characters, not spaces. So it still might not end up even with the block of code.
Also make sure you have no whitespace after the EOF
token on the line.
This may not be the answer for this case, but since I had the same error message with .to_csv, I tried .toCSV('name.csv') and the error message was different ("'SparseDataFrame' object has no attribute 'toCSV'"). So the problem was solved by turning the dataframe into a dense dataframe:
df.to_dense().to_csv("submission.csv", index = False, sep=',', encoding='utf-8')
So I figured out what is wrong with this statement:
Import-Csv H:\Programs\scripts\SomeText.csv |`
(Original)
Import-Csv H:\Programs\scripts\SomeText.csv -Delimiter "|"
(Proposed; you must use quotation marks, otherwise it will not work and ISE will give you an error)
It requires the -Delimiter "|"
, in order for the variable to be populated with an array of items. Otherwise, PowerShell ISE does not display the list of items.
I cannot say that I would recommend the |
operator, since it is used to pipe cmdlets into one another.
I still cannot get the if statement to return true and output the values entered via the prompt.
If anyone else can help, it would be great. I still appreciate the post, it has been very helpful!
I'm new to shell scripting too, but I had this same issue. Make sure at the end of your script you have a blank line. Otherwise it won't work.
Adm-zip has problems just compressing an existing archive https://github.com/cthackers/adm-zip/issues/64 as well as corruption with compressing binary files.
I've also run into compression corruption issues with node-zip https://github.com/daraosn/node-zip/issues/4
node-archiver is the only one that seems to work well to compress but it doesn't have any uncompress functionality.
There is a huge difference between the below two:
If you do not restrict the rows, then the CONNECT BY clause would produce multiple rows and will not give the desired output.
Apart from regular expressions, a couple of other alternatives are XMLTABLE and the MODEL clause.
Setup
SQL> CREATE TABLE t (
2 ID NUMBER GENERATED ALWAYS AS IDENTITY,
3 text VARCHAR2(100)
4 );
Table created.
SQL>
SQL> INSERT INTO t (text) VALUES ('word1, word2, word3');
1 row created.
SQL> INSERT INTO t (text) VALUES ('word4, word5, word6');
1 row created.
SQL> INSERT INTO t (text) VALUES ('word7, word8, word9');
1 row created.
SQL> COMMIT;
Commit complete.
SQL>
SQL> SELECT * FROM t;
ID TEXT
---------- ----------------------------------------------
1 word1, word2, word3
2 word4, word5, word6
3 word7, word8, word9
SQL>
Using XMLTABLE:
SQL> SELECT id,
2 trim(COLUMN_VALUE) text
3 FROM t,
4 xmltable(('"'
5 || REPLACE(text, ',', '","')
6 || '"'))
7 /
ID TEXT
---------- ------------------------
1 word1
1 word2
1 word3
2 word4
2 word5
2 word6
3 word7
3 word8
3 word9
9 rows selected.
SQL>
Using MODEL clause:
SQL> WITH
2 model_param AS
3 (
4 SELECT id,
5 text AS orig_str ,
6 ','
7 || text
8 || ',' AS mod_str ,
9 1 AS start_pos ,
10 Length(text) AS end_pos ,
11 (Length(text) - Length(Replace(text, ','))) + 1 AS element_count ,
12 0 AS element_no ,
13 ROWNUM AS rn
14 FROM t )
15 SELECT id,
16 trim(Substr(mod_str, start_pos, end_pos-start_pos)) text
17 FROM (
18 SELECT *
19 FROM model_param MODEL PARTITION BY (id, rn, orig_str, mod_str)
20 DIMENSION BY (element_no)
21 MEASURES (start_pos, end_pos, element_count)
22 RULES ITERATE (2000)
23 UNTIL (ITERATION_NUMBER+1 = element_count[0])
24 ( start_pos[ITERATION_NUMBER+1] = instr(cv(mod_str), ',', 1, cv(element_no)) + 1,
25 end_pos[iteration_number+1] = instr(cv(mod_str), ',', 1, cv(element_no) + 1) )
26 )
27 WHERE element_no != 0
28 ORDER BY mod_str ,
29 element_no
30 /
ID TEXT
---------- --------------------------------------------------
1 word1
1 word2
1 word3
2 word4
2 word5
2 word6
3 word7
3 word8
3 word9
9 rows selected.
SQL>
Dead horse perhaps, but a while back I was trying to do the same and came across a script to create a STP that tried to do what I was looking for, but it had a few quirks that needed some attention. In an attempt to track down where I found the script to post an update, I came across this thread and it seemed like a good spot to share it.
This STP (Which for the most part I take no credit for, and I can't find the site I found it on), takes a schema name, table name, and Y or N [to include or exclude headers] as input parameters and queries the supplied table, outputting each row in comma-separated, quoted, csv format.
I've made numerous fixes/changes to the original script, but the bones of it are from the OP, whoever that was.
Here is the script:
IF OBJECT_ID('get_csvFormat', 'P') IS NOT NULL
DROP PROCEDURE get_csvFormat
GO
CREATE PROCEDURE get_csvFormat(@schemaname VARCHAR(20), @tablename VARCHAR(30),@header char(1))
AS
BEGIN
IF ISNULL(@tablename, '') = ''
BEGIN
PRINT('NO TABLE NAME SUPPLIED, UNABLE TO CONTINUE')
RETURN
END
ELSE
BEGIN
DECLARE @cols VARCHAR(MAX), @sqlstrs VARCHAR(MAX), @heading VARCHAR(MAX), @schemaid int
--if no schemaname provided, default to dbo
IF ISNULL(@schemaname, '') = ''
SELECT @schemaname = 'dbo'
--if no header provided, default to Y
IF ISNULL(@header, '') = ''
SELECT @header = 'Y'
SELECT @schemaid = (SELECT schema_id FROM sys.schemas WHERE [name] = @schemaname)
SELECT
@cols = (
SELECT ' , CAST([', b.name + '] AS VARCHAR(50)) '
FROM sys.objects a
INNER JOIN sys.columns b ON a.object_id=b.object_id
WHERE a.name = @tablename AND a.schema_id = @schemaid
FOR XML PATH('')
),
@heading = (
SELECT ',"' + b.name + '"' FROM sys.objects a
INNER JOIN sys.columns b ON a.object_id=b.object_id
WHERE a.name= @tablename AND a.schema_id = @schemaid
FOR XML PATH('')
)
SET @tablename = @schemaname + '.' + @tablename
SET @heading = 'SELECT ''' + right(@heading,len(@heading)-1) + ''' AS CSV, 0 AS Sort' + CHAR(13)
SET @cols = '''"'',' + replace(right(@cols,len(@cols)-1),',', ',''","'',') + ',''"''' + CHAR(13)
IF @header = 'Y'
SET @sqlstrs = 'SELECT CSV FROM (' + CHAR(13) + @heading + ' UNION SELECT CONCAT(' + @cols + ') CSV, 1 AS Sort FROM ' + @tablename + CHAR(13) + ') X ORDER BY Sort, CSV ASC'
ELSE
SET @sqlstrs = 'SELECT CONCAT(' + @cols + ') CSV FROM ' + @tablename
IF @schemaid IS NOT NULL
EXEC(@sqlstrs)
ELSE
PRINT('SCHEMA DOES NOT EXIST')
END
END
GO
--------------------------------------
--EXEC get_csvFormat @schemaname='dbo', @tablename='TradeUnion', @header='Y'
This is actually a pretty common problem for newcomers to Python—especially since, across the standard library and popular third-party libraries, some reading functions strip out newlines, but almost no writing functions (except the log
-related stuff) add them.
So, there's a lot of Python code out there that does things like:
fw.write('\n'.join(line_list) + '\n')
or
fw.writelines(line + '\n' for line in line_list)
Either one is correct, and of course you could even write your own writelinesWithNewlines function that wraps it up…
But you should only do this if you can't avoid it.
It's better if you can create/keep the newlines in the first place—as in Greg Hewgill's suggestions:
line_list.append(new_line + "\n")
And it's even better if you can work at a higher level than raw lines of text, e.g., by using the csv module in the standard library, as esuaro suggests.
For example, right after defining fw
, you might do this:
cw = csv.writer(fw, delimiter='|')
Then, instead of this:
new_line = d[looking_for]+'|'+'|'.join(columns[1:])
line_list.append(new_line)
You do this:
row_list.append([d[looking_for]] + columns[1:])
And at the end, instead of this:
fw.writelines(line_list)
You do this:
cw.writerows(row_list)
Finally, your design is "open a file, then build up a list of lines to add to the file, then write them all at once". If you're going to open the file up top, why not just write the lines one by one? Whether you're using simple writes or a csv.writer
, it'll make your life simpler, and your code easier to read. (Sometimes there can be simplicity, efficiency, or correctness reasons to write a file all at once—but once you've moved the open
all the way to the opposite end of the program from the write
, you've pretty much lost any benefits of all-at-once.)
Add a backslash in FIELDS TERMINATED BY '\;'
For Example:
CREATE TABLE demo_table_1_csv
COMMENT 'my_csv_table 1'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\;'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION 'your_hdfs_path'
AS
select a.tran_uuid,a.cust_id,a.risk_flag,a.lookback_start_date,a.lookback_end_date,b.scn_name,b.alerted_risk_category,
CASE WHEN (b.activity_id is not null ) THEN 1 ELSE 0 END as Alert_Flag
FROM scn1_rcc1_agg as a LEFT OUTER JOIN scenario_activity_alert as b ON a.tran_uuid = b.activity_id;
I have tested it, and it worked.
Walkthrough on using MySQL's LOAD DATA command:
Create your table:
CREATE TABLE foo(myid INT, mymessage VARCHAR(255), mydecimal DECIMAL(8,4));
Create your tab delimited file (note there are tabs between the columns):
1 Heart disease kills 1.2
2 one out of every two 2.3
3 people in America. 4.5
Use the load data command:
LOAD DATA LOCAL INFILE '/tmp/foo.txt'
INTO TABLE foo COLUMNS TERMINATED BY '\t';
If you get a warning that this command can't be run, then you have to enable the --local-infile=1
parameter described here: How can I correct MySQL Load Error
The rows get inserted:
Query OK, 3 rows affected (0.00 sec)
Records: 3 Deleted: 0 Skipped: 0 Warnings: 0
Check if it worked:
mysql> select * from foo;
+------+----------------------+-----------+
| myid | mymessage | mydecimal |
+------+----------------------+-----------+
| 1 | Heart disease kills | 1.2000 |
| 2 | one out of every two | 2.3000 |
| 3 | people in America. | 4.5000 |
+------+----------------------+-----------+
3 rows in set (0.00 sec)
How to specify which columns to load your text file columns into:
Like this:
LOAD DATA LOCAL INFILE '/tmp/foo.txt' INTO TABLE foo
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
(@col1,@col2,@col3) set myid=@col1,mydecimal=@col3;
The file contents get put into variables @col1, @col2, @col3. myid gets column 1, and mydecimal gets column 3. If this were run, it would leave the second column (mymessage) NULL:
mysql> select * from foo;
+------+-----------+-----------+
| myid | mymessage | mydecimal |
+------+-----------+-----------+
| 1 | NULL | 1.2000 |
| 2 | NULL | 2.3000 |
| 3 | NULL | 4.5000 |
+------+-----------+-----------+
3 rows in set (0.00 sec)
Use this method:
$_product = wc_get_product( $id );
Official API-docs: wc_get_product
We eventually found the answer to this.
Excel will only respect the escaping of commas and speech marks if the column value is NOT preceded by a space. So generating the file without spaces like this...
Reference,Title,Description
1,"My little title","My description, which may contain ""speech marks"" and commas."
2,"My other little title","My other description, which may also contain ""speech marks"" and commas."
... fixed the problem. Hope this helps someone!
If someone is facing an issue using the TextToColumns function in UFT, please try using the function below.
myxl.Workbooks.Open myexcel.xls
myxl.Application.Visible = false
set mysheet = myxl.ActiveWorkbook.Worksheets(1)
Set objRange = myxl.Range("A1").EntireColumn
Set objRange2 = mysheet.Range("A1")
objRange.TextToColumns objRange2,1,1, , , , true
Here we are using a comma (,) as the delimiter.
You could also use the built-in "join" filter (http://jinja.pocoo.org/docs/templates/#join) like this:
{{ users|join(', ') }}
I would use a for
loop to iterate over the lines in the text file:
for line in my_text:
outputfile.writelines(data_parser(line, reps))
If you want to read the file line-by-line instead of loading the whole thing at the start of the script you could do something like this:
inputfile = open('test.dat')
outputfile = open('test.csv', 'w')
# sample text string, just for demonstration to let you know how the data looks like
# my_text = '"2012-06-23 03:09:13.23",4323584,-1.911224,-0.4657288,-0.1166382,-0.24823,0.256485,"NAN",-0.3489428,-0.130449,-0.2440527,-0.2942413,0.04944348,0.4337797,-1.105218,-1.201882,-0.5962594,-0.586636'
# dictionary definition 0-, 1- etc. are there to parse the date block delimited with dashes, and make sure the negative numbers are not effected
reps = {'"NAN"':'NAN', '"':'', '0-':'0,','1-':'1,','2-':'2,','3-':'3,','4-':'4,','5-':'5,','6-':'6,','7-':'7,','8-':'8,','9-':'9,', ' ':',', ':':',' }
for i in range(4): inputfile.next() # skip first four lines
for line in inputfile:
outputfile.writelines(data_parser(line, reps))
inputfile.close()
outputfile.close()
I found that using PARSENAME as above caused any name with a period to get nulled.
So if there was an initial or a title in the name followed by a dot they return NULL.
I found this worked for me:
SELECT
REPLACE(SUBSTRING(FullName, 1,CHARINDEX(',', FullName)), ',','') as Name,
REPLACE(SUBSTRING(FullName, CHARINDEX(',', FullName), LEN(FullName)), ',', '') as Surname
FROM Table1
Sometimes all the solutions with Workbooks.Open do not work, no matter how many parameters are set. For me, the fastest solution was to change the list separator in the Region & language settings: Region window / Additional settings... / List separator.
If the CSV is not opening properly, you probably have ',' set as the list separator. Just change it to ';' and everything is solved. Just the easiest way when "everything is against you" :P
Either use:
List<string> list = new List<string>(array);
or from LINQ:
List<string> list = array.ToList();
Or change your code to not rely on the specific implementation:
IList<string> list = array; // string[] implements IList<string>
We had this error on Oracle RAC 11g on Windows, and the solution was to create the same OS directory tree and external file on both nodes.
It works for me.
import smtplib
from email.mime.text import MIMEText
s = smtplib.SMTP('smtp.uk.xensource.com')
s.set_debuglevel(1)
msg = MIMEText("""body""")
sender = '[email protected]'
recipients = '[email protected],[email protected]'
msg['Subject'] = "subject line"
msg['From'] = sender
msg['To'] = recipients
s.sendmail(sender, recipients.split(','), msg.as_string())
That's because you have the Date in American format in line[i]
and UK format in the FormatString
.
11/20/2011
M/d/yyyy
I'm guessing you might need to change the FormatString to:
"M/d/yyyy h:mm"
states = "Alaska Alabama Arkansas American Samoa Arizona California Colorado"
states_list = states.split (' ')
It can be achieved by using rjust
:
line_new = word[0].rjust(10) + word[1].rjust(10) + word[2].rjust(10)
To split a string to an array in awk
we use the function split()
:
awk '{split($0, a, ":")}'
# ^^ ^ ^^^
# | | |
# string | delimiter
# |
# array to store the pieces
If no separator is given, it uses the FS
, which defaults to the space:
$ awk '{split($0, a); print a[2]}' <<< "a:b c:d e"
c:d
We can give a separator, for example :
:
$ awk '{split($0, a, ":"); print a[2]}' <<< "a:b c:d e"
b c
Which is equivalent to setting it through the FS
:
$ awk -F: '{split($0, a); print a[1]}' <<< "a:b c:d e"
b c
In gawk you can also provide the separator as a regexp:
$ awk '{split($0, a, ":*"); print a[2]}' <<< "a:::b c::d e" #note multiple :
b c
And even see what the delimiter was on every step by using its fourth parameter:
$ awk '{split($0, a, ":*", sep); print a[2]; print sep[1]}' <<< "a:::b c::d e"
b c
:::
Let's quote the man page of GNU awk:
split(string, array [, fieldsep [, seps ] ])
Divide string into pieces separated by fieldsep and store the pieces in array and the separator strings in the seps array. The first piece is stored in array[1], the second piece in array[2], and so forth. The string value of the third argument, fieldsep, is a regexp describing where to split string (much as FS can be a regexp describing where to split input records). If fieldsep is omitted, the value of FS is used. split() returns the number of elements created. seps is a gawk extension, with seps[i] being the separator string between array[i] and array[i+1]. If fieldsep is a single space, then any leading whitespace goes into seps[0] and any trailing whitespace goes into seps[n], where n is the return value of split() (i.e., the number of elements in array).
Tabulator is a set of unix command line tools to work with csv files that have header lines. Here is an example to extract columns by name from a file test.csv:
name,sex,house_nr,height,shoe_size
arthur,m,42,181,11.5
berta,f,101,163,8.5
chris,m,1333,175,10
don,m,77,185,12.5
elisa,f,204,166,7
Then tblmap -k name,height test.csv
produces
name,height
arthur,181
berta,163
chris,175
don,185
elisa,166
If the file is large, you may not want to load it entirely into memory at once. This approach avoids that. (Of course, making a dict out of it could still take up some RAM, but it's guaranteed to be smaller than the original file.)
my_dict = {}
for i, line in enumerate(file):
if (i - 8) % 7:
continue
k, v = line.split("\t")[:3:2]
my_dict[k] = v
Edit: Not sure where I got extend from before; I meant update.
In the case of integers that are included at the string, if you want to avoid casting them to int
individually you can do:
mList = [int(e) if e.isdigit() else e for e in mStr.split(',')]
It is called list comprehension, and it is based on set builder notation.
ex:
>>> mStr = "1,A,B,3,4"
>>> mList = [int(e) if e.isdigit() else e for e in mStr.split(',')]
>>> mList
>>> [1,'A','B',3,4]
Very simple to fix, you just need to turn the parameter to writerow into a list.
for item in RESULTS:
wr.writerow([item,])
This works, even when there are middle names:
=MID(A2,FIND(CHAR(1),SUBSTITUTE(A2," ",CHAR(1),LEN(A2)-LEN(SUBSTITUTE(A2," ",""))))+1,LEN(A2))
If you want everything BUT the last name, check out this answer.
If there are trailing spaces in your names, then you may want to remove them by replacing all instances of A2
by TRIM(A2)
in the above formula.
Note that it is only by pure chance that your first formula =RIGHT(A2,FIND(" ",A2,1)-1)
kind of works for Alistair Stevens
. This is because "Alistair"
and " Stevens"
happen to contain the same number of characters (if you count the leading space in " Stevens"
).
How about a list?
mystring.split(",")
It might help if you could explain what kind of info we are looking at. Maybe some background info also?
EDIT:
I had a thought you might want the info in groups of two?
then try:
re.split(r"\d*,\d*", mystring)
and also if you want them into tuples
[(pair[0], pair[1]) for match in re.split(r"\d*,\d*", mystring) for pair in match.split(",")]
in a more readable form:
mylist = []
for match in re.split(r"\d*,\d*", mystring):
for pair in match.split(",")
mylist.append((pair[0], pair[1]))
SELECT * FROM TABLE_NAME WHERE
(
LOCATE(',DOG,', CONCAT(',',COLUMN,','))>0 OR
LOCATE(',CAT,', CONCAT(',',COLUMN,','))>0
);
Just to add on to the existing answers: hopefully, you'll encounter something more like this in the future:
>>> word = 'abc'
>>> L = list(word)
>>> L
['a', 'b', 'c']
>>> ''.join(L)
'abc'
But what you're dealing with right now, go with @Cameron's answer.
>>> word = 'a,b,c'
>>> L = word.split(',')
>>> L
['a', 'b', 'c']
>>> ','.join(L)
'a,b,c'
I think I know this one...
Try sending your JSON as JSON by using PHP's header() function:
/**
* Send as JSON
*/
header("Content-Type: application/json", true);
Though you are passing valid JSON, jQuery's $.ajax doesn't think so because it's missing the header.
jQuery used to be fine without the header, but it was changed a few versions back.
ALSO
Be sure that your script is returning valid JSON. Use Firebug or Google Chrome's Developer Tools to check the request's response in the console.
UPDATE
You will also want to update your code to sanitize the $_POST to avoid sql injection attacks. As well as provide some error catching.
if (isset($_POST['get_member'])) {
$member_id = mysql_real_escape_string ($_POST["get_member"]);
$query = "SELECT * FROM `members` WHERE `id` = '" . $member_id . "';";
if ($result = mysql_query( $query )) {
$row = mysql_fetch_array($result);
$type = $row['type'];
$name = $row['name'];
$fname = $row['fname'];
$lname = $row['lname'];
$email = $row['email'];
$phone = $row['phone'];
$website = $row['website'];
$image = $row['image'];
/* JSON Row */
$json = array( "type" => $type, "name" => $name, "fname" => $fname, "lname" => $lname, "email" => $email, "phone" => $phone, "website" => $website, "image" => $image );
} else {
/* Your Query Failed, use mysql_error to report why */
$json = array('error' => 'MySQL Query Error');
}
/* Send as JSON */
header("Content-Type: application/json", true);
/* Return JSON */
echo json_encode($json);
/* Stop Execution */
exit;
}
If the set of colors is more or less fixed, the most efficient and also most readable way would be to use string constants in your app and then use MySQL's SET
type with FIND_IN_SET('red',colors)
in your queries. When using the SET
type with FIND_IN_SET, MySQL uses one integer to store all values and uses binary "and"
operation to check for presence of values which is way more efficient than scanning a comma-separated string.
In SET('red','blue','green')
, 'red'
would be stored internally as 1
, 'blue'
would be stored internally as 2
and 'green'
would be stored internally as 4
. The value 'red,blue'
would be stored as 3
(1|2
) and 'red,green'
as 5
(1|4
).
If you already have a list and want to add values from a delimited string, you can use AddRange
or InsertRange
. For example:
existingList.AddRange(names.Split(','));
I had a similarly strange problem with a file from the program e-prime (edat -> SPSS conversion), but then I discovered that there are many additional encodings you can use. this did the trick for me:
tbl <- read.delim("dir/file.txt", fileEncoding="UCS-2LE")
Try the following:
echo a#b#c | awk -F"#" '{$1 = ""; $NF = ""; print}' OFS=""
The GNU site suggests this nice awk script, which prints both the words and their frequency.
Possible changes:
- sort -nr (and reverse word and freq[word]) to see the result in descending order.
- freq[3]++ (replace 3 with the column number) to count words in a single column only.
Here goes:
# wordfreq.awk --- print list of word frequencies
{
$0 = tolower($0) # remove case distinctions
# remove punctuation
gsub(/[^[:alnum:]_[:blank:]]/, "", $0)
for (i = 1; i <= NF; i++)
freq[$i]++
}
END {
for (word in freq)
printf "%s\t%d\n", word, freq[word]
}
I see something strange. You are using for ESCAPING the same character you use for ENCLOSING, so the engine does not know what to do when it finds a '"', and I think that is why nothing seems to be in the right place. I think that if you remove the ESCAPING line, it should run great. Like:
LOAD DATA INFILE "/home/paul/clientdata.csv"
INTO TABLE CSVImport
COLUMNS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
Unless you analyze (manually, visually, ...) your CSV and find which character it uses for escaping. Sometimes it is '\'. But if you do not have one, do not use it.
The other responders are correct that split() takes a regex as the argument, so you'll have to fix that first. The other problem is that you're assuming that the line break characters are the same as the system default. Depending on where the data is coming from, and where the program is running, this assumption may not be correct.
This script outputs the unique values in each column of a given file, together with how often each occurs. It assumes that the first line of the given file is a header line. There is no need to define the number of fields. Simply save the script in a bash file (.sh) and provide the tab-delimited file as a parameter to this script.
Code
#!/bin/bash
awk '
(NR==1){
for(fi=1; fi<=NF; fi++)
fname[fi]=$fi;
}
(NR!=1){
for(fi=1; fi<=NF; fi++)
arr[fname[fi]][$fi]++;
}
END{
for(fi=1; fi<=NF; fi++){
out=fname[fi];
for (item in arr[fname[fi]])
out=out"\t"item"_"arr[fname[fi]][item];
print(out);
}
}
' $1
Execution Example:
bash> ./script.sh <path to tab-delimited file>
Output Example
isRef A_15 C_42 G_24 T_18
isCar YEA_10 NO_40 NA_50
isTv FALSE_33 TRUE_66
Yes: simply press and hold the Alt key, click and drag to select the lines whose columns you wish to edit, and begin typing.
You can also go to Settings > Preferences..., and in the Editing tab, turn on multi-editing, to enable selection of multiple separate regions or columns of text to edit at once.
It's much more intuitive, as you can see your edits live as you type.
Use Linq, it is a very quick and easy way.
string mystring = "0, 10, 20, 30, 100, 200";
var query = from val in mystring.Split(',')
select int.Parse(val);
foreach (int num in query)
{
Console.WriteLine(num);
}
I had a look at the above answers and honestly I find none of them satisfactory. What you want to do is essentially mimic the Perl split functionality. Why Java doesn't allow this and doesn't have a join() method somewhere is beyond me, but I digress. You don't even need a class for this, really; it's just a function. Run this sample program:
Some of the earlier answers have excessive null-checking; I recently wrote a response about that in a question here:
https://stackoverflow.com/users/18393/cletus
Anyway, the code:
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Split {
public static List<String> split(String s, String pattern) {
assert s != null;
assert pattern != null;
return split(s, Pattern.compile(pattern));
}
public static List<String> split(String s, Pattern pattern) {
assert s != null;
assert pattern != null;
Matcher m = pattern.matcher(s);
List<String> ret = new ArrayList<String>();
int start = 0;
while (m.find()) {
ret.add(s.substring(start, m.start()));
ret.add(m.group());
start = m.end();
}
ret.add(start >= s.length() ? "" : s.substring(start));
return ret;
}
private static void testSplit(String s, String pattern) {
System.out.printf("Splitting '%s' with pattern '%s'%n", s, pattern);
List<String> tokens = split(s, pattern);
System.out.printf("Found %d matches%n", tokens.size());
int i = 0;
for (String token : tokens) {
System.out.printf(" %d/%d: '%s'%n", ++i, tokens.size(), token);
}
System.out.println();
}
public static void main(String args[]) {
testSplit("abcdefghij", "z"); // "abcdefghij"
testSplit("abcdefghij", "f"); // "abcde", "f", "ghi"
testSplit("abcdefghij", "j"); // "abcdefghi", "j", ""
testSplit("abcdefghij", "a"); // "", "a", "bcdefghij"
testSplit("abcdefghij", "[bdfh]"); // "a", "b", "c", "d", "e", "f", "g", "h", "ij"
}
}
You can use +
if you know all the values are strings. Jinja also provides the ~
operator, which will ensure all values are converted to string first.
{% set my_string = my_string ~ stuff ~ ', '%}
You could also use the following function.
#include <string>
#include <vector>
using namespace std;

void tokenize(const string& str, vector<string>& tokens, const string& delimiters = ",")
{
// Skip delimiters at beginning.
string::size_type lastPos = str.find_first_not_of(delimiters, 0);
// Find first non-delimiter.
string::size_type pos = str.find_first_of(delimiters, lastPos);
while (string::npos != pos || string::npos != lastPos) {
// Found a token, add it to the vector.
tokens.push_back(str.substr(lastPos, pos - lastPos));
// Skip delimiters.
lastPos = str.find_first_not_of(delimiters, pos);
// Find next non-delimiter.
pos = str.find_first_of(delimiters, lastPos);
}
}
The problem with String.Join is that you are not handling the case of a comma already existing in the value. When a comma exists then you surround the value in Quotes and replace all existing Quotes with double Quotes.
String.Join(",",{"this value has a , in it","This one doesn't", "This one , does"});
See CSV Module
Avoid sqlite3_clear_bindings(stmt)
.
The code in the test sets the bindings every time through which should be enough.
The C API intro from the SQLite docs says:
Prior to calling sqlite3_step() for the first time or immediately after sqlite3_reset(), the application can invoke the sqlite3_bind() interfaces to attach values to the parameters. Each call to sqlite3_bind() overrides prior bindings on the same parameter
There is nothing in the docs for sqlite3_clear_bindings
saying you must call it in addition to simply setting the bindings.
More detail: Avoid_sqlite3_clear_bindings()
You can use yourstring.split("\x09"); I tested it, and it works.
CSV isn't quite that simple. Depending on the limits of the data you have, you might have to worry about quoted values (which may contain commas and newlines) and escaping quotes.
So if your data are restricted enough that you can get away with simple comma-splitting, a shell script can do that easily. If, on the other hand, you need to parse CSV ‘properly', bash would not be my first choice. Instead I'd look at a higher-level scripting language, for example Python with a csv.reader.
I used this for a list of items that had to be alphanumeric, without underscores at the front of each item.
^(([0-9a-zA-Z][0-9a-zA-Z_]*)([,][0-9a-zA-Z][0-9a-zA-Z_]*)*)$
Not sure if this is still a problem but I found this simple solution:
Hope this was clear and easy to follow
The best choice is to use the function explode().
$content = "dad,fger,fgferf,fewf";
$delimiters =",";
$explodes = explode($delimiters, $content);
foreach($explodes as $explode) {
echo "This is an exploded string: ". $explode;
}
If you want a faster approach, you can use a delimiter tool like Delimiters.co. There are many websites like this, but I prefer simple PHP code.
The C++ String Toolkit Library (StrTk) has a token grid class that allows you to load data either from text files, strings or char buffers, and to parse/process them in a row-column fashion.
You can specify the row delimiters and column delimiters or just use the defaults.
void foo()
{
std::string data = "1,2,3,4,5\n"
"0,2,4,6,8\n"
"1,3,5,7,9\n";
strtk::token_grid grid(data,data.size(),",");
for(std::size_t i = 0; i < grid.row_count(); ++i)
{
strtk::token_grid::row_type r = grid.row(i);
for(std::size_t j = 0; j < r.size(); ++j)
{
std::cout << r.get<int>(j) << "\t";
}
std::cout << std::endl;
}
std::cout << std::endl;
}
More examples can be found Here
You need to put an actual tab character after the -t\ and to do that in a shell you hit ctrl-v and then the tab character. Most shells I've used support this mode of literal tab entry.
Beware, though, because copying and pasting from another place generally does not preserve tabs.
"1,2,3,4".split(",")
as strings
"1,2,3,4".split(",").map { |s| s.to_i }
as integers
I've used the following code, pillaged from someone's blog (pls forgive lack of citation). It takes care of quotations, newline and comma in a reasonably elegant way by quoting out each field value.
/// <summary>
/// Converts the passed in data table to a CSV-style string.
/// </summary>
/// <param name="table">Table to convert</param>
/// <returns>Resulting CSV-style string</returns>
public static string ToCSV(this DataTable table)
{
return ToCSV(table, ",", true);
}
/// <summary>
/// Converts the passed in data table to a CSV-style string.
/// </summary>
/// <param name="table">Table to convert</param>
/// <param name="includeHeader">true - include headers<br/>
/// false - do not include header column</param>
/// <returns>Resulting CSV-style string</returns>
public static string ToCSV(this DataTable table, bool includeHeader)
{
return ToCSV(table, ",", includeHeader);
}
/// <summary>
/// Converts the passed in data table to a CSV-style string.
/// </summary>
/// <param name="table">Table to convert</param>
/// <param name="includeHeader">true - include headers<br/>
/// false - do not include header column</param>
/// <returns>Resulting CSV-style string</returns>
public static string ToCSV(this DataTable table, string delimiter, bool includeHeader)
{
var result = new StringBuilder();
if (includeHeader)
{
foreach (DataColumn column in table.Columns)
{
result.Append(column.ColumnName);
result.Append(delimiter);
}
result.Remove(--result.Length, 0);
result.Append(Environment.NewLine);
}
foreach (DataRow row in table.Rows)
{
foreach (object item in row.ItemArray)
{
if (item is DBNull)
result.Append(delimiter);
else
{
string itemAsString = item.ToString();
// Double up all embedded double quotes
itemAsString = itemAsString.Replace("\"", "\"\"");
// To keep things simple, always delimit with double-quotes
// so we don't have to determine in which cases they're necessary
// and which cases they're not.
itemAsString = "\"" + itemAsString + "\"";
result.Append(itemAsString + delimiter);
}
}
result.Remove(--result.Length, 0);
result.Append(Environment.NewLine);
}
return result.ToString();
}
If you use SQL Server 2008 or higher, use table valued parameters; for example:
CREATE TYPE intListTableType AS TABLE (n int NOT NULL)
GO
CREATE PROCEDURE [dbo].[GetAccounts](@accountIds intListTableType READONLY)
AS
BEGIN
SELECT *
FROM accountsTable
WHERE accountId IN (SELECT n FROM @accountIds)
END
GO
DECLARE @tvp intListTableType
-- inserts each id as one row in the tvp table
INSERT @tvp(n) VALUES (16509),(16685),(46173),(42925),(46167),(5511)
EXEC GetAccounts @tvp
For the sake of completeness, I'd like to add that you cannot reverse String#split in general, as it accepts a regular expression.
"hello__world".split("_+");
Yields ["hello", "world"]
.
"hello_world".split("_+");
Yields ["hello", "world"]
.
These yield identical results from a different starting point. splitting is not a one-to-one operation, and is thus non-reversible.
This all being said, if you assume your parameter to be a fixed string, not regex, then you can certainly do this using one of the many posted answers.
This is another version which really does not have any restrictions (e.g.: special chars when using xml approach, number of records in CTE approach) and it runs much faster based on a test on 10M+ records with source string average length of 4000. Hope this could help.
Create function [dbo].[udf_split] (
@ListString nvarchar(max),
@Delimiter nvarchar(1000),
@IncludeEmpty bit)
Returns @ListTable TABLE (ID int, ListValue nvarchar(1000))
AS
BEGIN
Declare @CurrentPosition int, @NextPosition int, @Item nvarchar(max), @ID int, @L int
Select @ID = 1,
@L = len(replace(@Delimiter,' ','^')),
@ListString = @ListString + @Delimiter,
@CurrentPosition = 1
Select @NextPosition = Charindex(@Delimiter, @ListString, @CurrentPosition)
While @NextPosition > 0 Begin
Set @Item = LTRIM(RTRIM(SUBSTRING(@ListString, @CurrentPosition, @NextPosition-@CurrentPosition)))
If @IncludeEmpty=1 or LEN(@Item)>0 Begin
Insert Into @ListTable (ID, ListValue) Values (@ID, @Item)
Set @ID = @ID+1
End
Set @CurrentPosition = @NextPosition+@L
Set @NextPosition = Charindex(@Delimiter, @ListString, @CurrentPosition)
End
RETURN
END
I'd do it like this:
f = open('test.txt')
l = [l for l in f.readlines() if l.strip()]
f.close()
print l
I've resolved this kind of problem with a regular expression pattern. It tends to be slower than a regular query, but it's an easy way to retrieve data from a comma-delimited column.
SELECT *
FROM `TABLE`
WHERE `field` REGEXP ',?[SEARCHED-VALUE],?';
The optional leading and trailing commas (the ? quantifiers) help to match the value at the beginning or the end of the string.
Hope that helps anyone in the future.
The WM_CONCAT
function (if included in your database, pre Oracle 11.2) or LISTAGG
(starting Oracle 11.2) should do the trick nicely. For example, this gets a comma-delimited list of the table names in your schema:
select listagg(table_name, ', ') within group (order by table_name)
from user_tables;
or
select wm_concat(table_name)
from user_tables;
Yes, a very weird situation for Oracle.
If you specify 2000 IDs inside the IN clause, it will fail. This fails:
select ...
where id in (1,2,....2000)
but if you simply put the 2000 IDs in another table (a temp table, for example), the query below will work:
select ...
where id in (select userId
from temptable_with_2000_ids )
Alternatively, you could split the IDs into groups of 1000 and execute them group by group.
I use this function (SQL Server 2005 and above).
create function [dbo].[Split]
(
@string nvarchar(4000),
@delimiter nvarchar(10)
)
returns @table table
(
[Value] nvarchar(4000)
)
begin
declare @nextString nvarchar(4000)
declare @pos int, @nextPos int
set @nextString = ''
set @string = @string + @delimiter
set @pos = charindex(@delimiter, @string)
set @nextPos = 1
while (@pos <> 0)
begin
set @nextString = substring(@string, 1, @pos - 1)
insert into @table
(
[Value]
)
values
(
@nextString
)
set @string = substring(@string, @pos + len(@delimiter), len(@string))
set @nextPos = @pos
set @pos = charindex(@delimiter, @string)
end
return
end
The solutions so far are all quite complicated. The idiomatic solution should doubtless be:
String.Join(",", x.Cast(Of String)().ToArray())
There's no need for fancy acrobatics in new framework versions. Supposing a not-so-modern version, the following would be easiest:
Console.WriteLine(String.Join(",", CType(x.ToArray(GetType(String)), String())))
mspmsp's second solution is a nice approach as well but it's not working because it misses the AddressOf
keyword. Also, Convert.ToString
is rather inefficient (lots of unnecessary internal evaluations) and the Convert
class is generally not very cleanly designed. I tend to avoid it, especially since it's completely redundant.
There's a book in Martin Fowler's Signature Series called Refactoring Databases. That provides a list of techniques for refactoring databases. I can't say I've heard a list of database patterns so much.
I would also highly recommend David C. Hay's Data Model Patterns and the follow up A Metadata Map which builds on the first and is far more ambitious and intriguing. The Preface alone is enlightening.
Also a great place to look for some pre-canned database models is Len Silverston's Data Model Resource Book series. Volume 1 contains universally applicable data models (employees, accounts, shipping, purchases, etc.), Volume 2 contains industry-specific data models (accounting, healthcare, etc.), and Volume 3 provides data model patterns.
Finally, while this book is ostensibly about UML and Object Modelling, Peter Coad's Modeling in Color With UML provides an "archetype" driven process of entity modeling starting from the premise that there are 4 core archetypes of any object/data model
Java 8 Native Type
List<Integer> example = new ArrayList<>();
example.add(1);
example.add(2);
example.add(3);
...
example.stream().map(String::valueOf).collect(Collectors.joining(","));
Java 8 Custom Object:
List<Person> person;
...
person.stream().map(p -> String.valueOf(p.getAge())).collect(Collectors.joining(","));
A super-fast XML method, if you want to use a stored procedure and pass a comma-separated list of department IDs:
Declare @XMLList xml
SET @XMLList=cast('<i>'+replace(@DepartmentIDs,',','</i><i>')+'</i>' as xml)
SELECT x.i.value('.','varchar(5)') from @XMLList.nodes('i') x(i)
All credit goes to Guru Brad Schulz's Blog
Forgive me if I am off-base here, but isn't this what the Office PIA's are for?
Note that Matt's code will result in an extra comma at the end of the string; using COALESCE (or ISNULL for that matter) as shown in the link in Lance's post uses a similar method but doesn't leave you with an extra comma to remove. For the sake of completeness, here's the relevant code from Lance's link on sqlteam.com:
DECLARE @EmployeeList varchar(100)
SELECT @EmployeeList = COALESCE(@EmployeeList + ', ', '') +
CAST(EmpUniqueID AS varchar(5))
FROM SalesCallsEmployees
WHERE SalCal_UniqueID = 1
install homebrew via terminal
brew install mysql
If you are on Windows, try this:
"C:\Program Files\RStudio\bin\rstudio.exe" http_proxy=http://host:port/
Try this way:
As of Ruby 1.9, uuid generation is built-in. Use the SecureRandom.uuid
function.
Generating Guids in Ruby
This was helpful for me
It's not so much the prime numbers themselves that are important, but the algorithms that work with primes. In particular, finding the factors of a number (any number).
As you know, any number has at least two factors. Prime numbers have the unique property in that they have exactly two factors: 1 and themselves.
The reason factoring is so important is mathematicians and computer scientists don't know how to factor a number without simply trying every possible combination. That is, first try dividing by 2, then by 3, then by 4, and so forth. If you try to factor a prime number--especially a very large one--you'll have to try (essentially) every possible number between 2 and that large prime number. Even on the fastest computers, it will take years (even centuries) to factor the kinds of prime numbers used in cryptography.
It is the fact that we don't know how to efficiently factor a large number that gives cryptographic algorithms their strength. If, one day, someone figures out how to do it, all the cryptographic algorithms we currently use will become obsolete. This remains an open area of research.
If you're on a page and this does not work:
$page_object = get_queried_object();
$page_id = get_queried_object_id();
you can try to build the permalink manually with PHP so you can lookup the post ID:
// get or make permalink
$url = !empty(get_the_permalink()) ? get_the_permalink() : (isset($_SERVER['HTTPS']) ? "https" : "http") . "://$_SERVER[HTTP_HOST]$_SERVER[REQUEST_URI]";
$permalink = strtok($url, '?');
// get post_id using url/permalink
$post_id = url_to_postid($url);
// want the post or postmeta? use get_post() or get_post_meta()
$post = get_post($post_id);
$postmeta = get_post_meta($post_id);
It may not catch every possible permalink (especially since I'm stripping out the query string), but you can modify it to fit your use case.
Just try this: you need to open the connection using connection.Open() on the SqlCommand.Connection object before calling ExecuteNonQuery().
Using bash
:
echo "|${COMMAND/$'\n'}|"
(Note that the control character in this question is a 'newline' (\n
), not a carriage return (\r
); the latter would have output REBOOT|
on a single line.)
Uses the Bash Shell Parameter Expansion ${parameter/pattern/string}
:
The pattern is expanded to produce a pattern just as in filename expansion. Parameter is expanded and the longest match of pattern against its value is replaced with string. [...] If string is null, matches of pattern are deleted and the / following pattern may be omitted.
Also uses the $''
ANSI-C quoting construct to specify a newline as $'\n'
. Using a newline directly would work as well, though less pretty:
echo "|${COMMAND/
}|"
#!/bin/bash
COMMAND="$'\n'REBOOT"
echo "|${COMMAND/$'\n'}|"
# Outputs |REBOOT|
Or, using newlines:
#!/bin/bash
COMMAND="
REBOOT"
echo "|${COMMAND/
}|"
# Outputs |REBOOT|
You forgot to print
the result. What you get is the P
in RE(P)L
and not the actual printed result.
In Py2.x you should so something like
>>> print "\n".join(['I', 'would', 'expect', 'multiple', 'lines'])
I
would
expect
multiple
lines
and in Py3.X, print is a function, so you should do
print("\n".join(['I', 'would', 'expect', 'multiple', 'lines']))
Now that was the short answer. Your Python Interpreter, which is actually a REPL, always displays the representation of the string rather than the actual displayed output. Representation is what you would get with the repr
statement
>>> print repr("\n".join(['I', 'would', 'expect', 'multiple', 'lines']))
'I\nwould\nexpect\nmultiple\nlines'
class A{
public void methodA(){
new B().methodB();
//or
B.methodB1();
}
}
class B{
//instance method
public void methodB(){
}
//static method
public static void methodB1(){
}
}
try this
<a id="link" href="www.gmail.com" target="_blank" >gmail</a>
Install App::cpanminus from CPAN (use cpan App::cpanminus for this).
Then run cpanm --uninstall Module::Name (note the "m") to uninstall the module with cpanminus.
This should work.
I believe NumPy arrays only admit numeric values, so convert it to a list:
kOUT = np.zeros(N+1)
kOUT = kOUT.tolist()
Content roots are folders holding your project code, while source roots are defined much the same way. The only difference I came to understand was that the code in source roots is built before the code in the content root.
Unchecking them wouldn't affect the runtime till the point you're not making separate modules in your package which are manually connected to Django. That means if any of your files do not hold the 'from django import...' or any of the function isn't called via django, unchecking these 2 options will result in a malfunction.
Update: the problem only arises when using a virtual environment, and only when controlling the project via the provided terminal, because the terminal still works via the default system PYTHONPATH and not the virtual env, while the Python/Django control panel works fine.
It turns out the answer was ridiculously simple, but mystifying as to why it was necessary.
In the IIS Manager on the server, I set the application pool for my web application to not allow 32-bit assemblies.
It seems it assumes, on a 64-bit system, that you must want the 32 bit assembly. Bizarre.
One way of doing that is by using trap DEBUG
:
f() { bash -c "$BASH_COMMAND" >& /tmp/out.log; }
trap 'f' DEBUG
Now most recently executed command's stdout and stderr will be available in /tmp/out.log
Only downside is that it will execute a command twice: once to redirect output and error to /tmp/out.log
and once normally. Probably there is some way to prevent this behavior as well.
Could use IgnoreAttribute on the property which needs to be ignored
Not sure if it will do for Greek, but I had the same issue for Brazilian Portuguese characters and my solution was to use html entities. I had basically two cases:
For these, I first encoded it to html entities with htmlentities()
and then decoded them to iso-8859-1
. Example:
$s = html_entity_decode(htmlentities($my_variable_text), ENT_COMPAT | ENT_HTML401, 'iso-8859-1');
For these, I just left htmlentities()
call out. Example:
$s = html_entity_decode("Treasurer/Trésorier", ENT_COMPAT | ENT_HTML401, 'iso-8859-1');
Then I passed $s
to FPDF, like in this example:
$pdf->Cell(100, 20, $s, 0, 0, 'L');
Note: ENT_COMPAT | ENT_HTML401
is the standard value for parameter #2, as in http://php.net/manual/en/function.html-entity-decode.php
Hope that helps.
In case your static code analyzer (like SonarQube) complains, but you cannot think of another way other than sleeping, you may try a hack like:
Awaitility.await().pollDelay(Durations.ONE_SECOND).until(() -> true);
It's conceptually incorrect, but it is the same as Thread.sleep(1000)
.
The best way, of course, is to pass a Callable, with your appropriate condition, rather than true
, which I have.
When an element already has a class name defined, its influence on the element is tied to its position in the string of class names. Later classes override earlier ones, if there is a conflict.
Adding a class to an element ought to move the class name to the sharp end of the list, if it exists already.
document.addClass= function(el, css){
var tem, C= el.className.split(/\s+/), A=[];
while(C.length){
tem= C.shift();
if(tem && tem!= css) A[A.length]= tem;
}
A[A.length]= css;
return el.className= A.join(' ');
}
http://momentjs.com/docs/#/displaying/unix-timestamp/
You get the number of Unix seconds, not milliseconds!
You need to multiply it by 1000 or use valueOf(), and don't forget to use a formatter, since you are using a non-ISO 8601 format. If you forget to pass the formatter, the date will be parsed in the UTC timezone or as an invalid date.
moment("10/15/2014 9:00", "MM/DD/YYYY HH:mm").valueOf()
Set up a user, a host the user is allowed to talk to MySQL by using (e.g. localhost), grant that user adequate permissions to do what they need with the database .. and presto.
The user will need basic CRUD privileges to start, that's sufficient to store data received from a form. The rest of the permissions are self explanatory, i.e. permission to alter tables, etc. Give the user no more, no less power than it needs to do its work.
The esentutl /y option allows copying a (single) file with a progress bar. The command should look like:
esentutl /y "FILE.EXT" /d "DEST.EXT" /o
The command is available on every Windows machine, but the y option first appeared in Windows Vista.
As it works only with single files, it does not look very useful for small ones.
Another limitation is that the command cannot overwrite files. Here's a wrapper script that checks the destination and, if needed, deletes it (help can be seen by passing /h).
Install mysysgit. (Same as Greg Hewgill's answer.)
Install Tortoisegit. (Tortoisegit requires mysysgit or something similiar like Cygwin.)
After TortoiseGit is installed, right-click on a folder, select Git Clone..., then enter the Url of the repository, then click Ok.
This answer is not any better than just installing mysysgit, but you can avoid the dreaded command line. :)
You could just set the width to auto. Then the width of the div will equal 0 if it has no content.
width:auto;
You can get around the problem without switching to classic mode and still use Application_Start
public class Global : HttpApplication
{
private static HttpRequest initialRequest;
static Global()
{
initialRequest = HttpContext.Current.Request;
}
void Application_Start(object sender, EventArgs e)
{
//access the initial request here
}
}
For some reason, the static type is created with a request in its HTTPContext, allowing you to store it and reuse it immediately in the Application_Start event
Yes in Notepad++ you can do that!
Some hotkeys regarding comments:
Source: shortcutworld.com from the Comment / uncomment section.
On the link you will find many other useful shortcuts too.
In VIM, take a look at the following to see different alternatives for what you might have done:
:help opening-window
For instance:
Ctrl-W s
Ctrl-W o
Ctrl-W v
Ctrl-W o
Ctrl-W s
...
All will get executed, on a first-called, first-run basis!
<div id="target"></div>
<script>
$(document).ready(function(){
jQuery('#target').append('target edit 1<br>');
});
$(document).ready(function(){
jQuery('#target').append('target edit 2<br>');
});
$(document).ready(function(){
jQuery('#target').append('target edit 3<br>');
});
</script>
Demo: as you can see, they do not replace each other.
Also, one thing I would like to mention:
in place of this
$(document).ready(function(){});
you can use this shortcut
jQuery(function(){
//dom ready codes
});
Was looking for the solution of a similar problem. Thanks to the answer provided by Davlet and Cupcake I was able to solve my problem.
Posting this answer here since I think this is the intended question
So I guess generally the problem that people like me face is what to do when a repo is cloned by another user on a server and that user is no longer associated with the repo.
How to pull from the repo without using the credentials of the old user ?
You edit the .git/config file of your repo.
and change
url = https://<old-username>@github.com/abc/repo.git/
to
url = https://<new-username>@github.com/abc/repo.git/
After saving the changes, from now onwards git pull will pull data while using credentials of the new user.
I hope this helps anyone with a similar problem
I found other methods to check if a point is inside a polygon (here). I tested two of them only (is_inside_sm and is_inside_postgis) and the results were the same as the other methods.
Thanks to @epifanio, I parallelized the codes and compared them with @epifanio and @user3274748 (ray_tracing_numpy) methods. Note that both methods had a bug so I fixed them as shown in their codes below.
One more thing that I found is that the code provided for creating a polygon does not generate a closed path np.linspace(0,2*np.pi,lenpoly)[:-1]
. As a result, the codes provided in the above GitHub repository may not work properly. So it's better to create a closed path (the first and last points should be the same).
Codes
Method 1: parallelpointinpolygon
from numba import jit, njit
import numba
import numpy as np
@jit(nopython=True)
def pointinpolygon(x,y,poly):
n = len(poly)
inside = False
p2x = 0.0
p2y = 0.0
xints = 0.0
p1x,p1y = poly[0]
for i in numba.prange(n+1):
p2x,p2y = poly[i % n]
if y > min(p1y,p2y):
if y <= max(p1y,p2y):
if x <= max(p1x,p2x):
if p1y != p2y:
xints = (y-p1y)*(p2x-p1x)/(p2y-p1y)+p1x
if p1x == p2x or x <= xints:
inside = not inside
p1x,p1y = p2x,p2y
return inside
@njit(parallel=True)
def parallelpointinpolygon(points, polygon):
D = np.empty(len(points), dtype=numba.boolean)
for i in numba.prange(0, len(D)): #<-- Fixed here, must start from zero
D[i] = pointinpolygon(points[i,0], points[i,1], polygon)
return D
Method 2: ray_tracing_numpy_numba
@jit(nopython=True)
def ray_tracing_numpy_numba(points,poly):
x,y = points[:,0], points[:,1]
n = len(poly)
inside = np.zeros(len(x),np.bool_)
p2x = 0.0
p2y = 0.0
p1x,p1y = poly[0]
for i in range(n+1):
p2x,p2y = poly[i % n]
idx = np.nonzero((y > min(p1y,p2y)) & (y <= max(p1y,p2y)) & (x <= max(p1x,p2x)))[0]
if len(idx): # <-- Fixed here. If idx is null skip comparisons below.
if p1y != p2y:
xints = (y[idx]-p1y)*(p2x-p1x)/(p2y-p1y)+p1x
if p1x == p2x:
inside[idx] = ~inside[idx]
else:
idxx = idx[x[idx] <= xints]
inside[idxx] = ~inside[idxx]
p1x,p1y = p2x,p2y
return inside
Method 3: Matplotlib contains_points
path = mpltPath.Path(polygon,closed=True) # <-- Very important to mention that the path
# is closed (default is false)
Method 4: is_inside_sm (got it from here)
@jit(nopython=True)
def is_inside_sm(polygon, point):
length = len(polygon)-1
dy2 = point[1] - polygon[0][1]
intersections = 0
ii = 0
jj = 1
while ii<length:
dy = dy2
dy2 = point[1] - polygon[jj][1]
# consider only lines which are not completely above/below/right from the point
if dy*dy2 <= 0.0 and (point[0] >= polygon[ii][0] or point[0] >= polygon[jj][0]):
# non-horizontal line
if dy<0 or dy2<0:
F = dy*(polygon[jj][0] - polygon[ii][0])/(dy-dy2) + polygon[ii][0]
if point[0] > F: # if line is left from the point - the ray moving towards left, will intersect it
intersections += 1
elif point[0] == F: # point on line
return 2
# point on upper peak (dy2=dx2=0) or horizontal line (dy=dy2=0 and dx*dx2<=0)
elif dy2==0 and (point[0]==polygon[jj][0] or (dy==0 and (point[0]-polygon[ii][0])*(point[0]-polygon[jj][0])<=0)):
return 2
ii = jj
jj += 1
#print 'intersections =', intersections
return intersections & 1
@njit(parallel=True)
def is_inside_sm_parallel(points, polygon):
ln = len(points)
D = np.empty(ln, dtype=numba.boolean)
for i in numba.prange(ln):
D[i] = is_inside_sm(polygon,points[i])
return D
Method 5: is_inside_postgis (got it from here)
@jit(nopython=True)
def is_inside_postgis(polygon, point):
length = len(polygon)
intersections = 0
dx2 = point[0] - polygon[0][0]
dy2 = point[1] - polygon[0][1]
ii = 0
jj = 1
while jj<length:
dx = dx2
dy = dy2
dx2 = point[0] - polygon[jj][0]
dy2 = point[1] - polygon[jj][1]
F =(dx-dx2)*dy - dx*(dy-dy2);
if 0.0==F and dx*dx2<=0 and dy*dy2<=0:
return 2;
if (dy>=0 and dy2<0) or (dy2>=0 and dy<0):
if F > 0:
intersections += 1
elif F < 0:
intersections -= 1
ii = jj
jj += 1
#print 'intersections =', intersections
return intersections != 0
@njit(parallel=True)
def is_inside_postgis_parallel(points, polygon):
ln = len(points)
D = np.empty(ln, dtype=numba.boolean)
for i in numba.prange(ln):
D[i] = is_inside_postgis(polygon,points[i])
return D
Timing for 10 million points:
parallelpointinpolygon Elapsed time: 4.0122294425964355
Matplotlib contains_points Elapsed time: 14.117807388305664
ray_tracing_numpy_numba Elapsed time: 7.908452272415161
sm_parallel Elapsed time: 0.7710440158843994
is_inside_postgis_parallel Elapsed time: 2.131121873855591
Here is the code.
import matplotlib.pyplot as plt
import matplotlib.path as mpltPath
from time import time
import numpy as np
np.random.seed(2)
time_parallelpointinpolygon=[]
time_mpltPath=[]
time_ray_tracing_numpy_numba=[]
time_is_inside_sm_parallel=[]
time_is_inside_postgis_parallel=[]
n_points=[]
for i in range(1, 10000002, 1000000):
n_points.append(i)
lenpoly = 100
polygon = [[np.sin(x)+0.5,np.cos(x)+0.5] for x in np.linspace(0,2*np.pi,lenpoly)]
polygon = np.array(polygon)
N = i
points = np.random.uniform(-1.5, 1.5, size=(N, 2))
#Method 1
start_time = time()
inside1=parallelpointinpolygon(points, polygon)
time_parallelpointinpolygon.append(time()-start_time)
# Method 2
start_time = time()
path = mpltPath.Path(polygon,closed=True)
inside2 = path.contains_points(points)
time_mpltPath.append(time()-start_time)
# Method 3
start_time = time()
inside3=ray_tracing_numpy_numba(points,polygon)
time_ray_tracing_numpy_numba.append(time()-start_time)
# Method 4
start_time = time()
inside4=is_inside_sm_parallel(points,polygon)
time_is_inside_sm_parallel.append(time()-start_time)
# Method 5
start_time = time()
inside5=is_inside_postgis_parallel(points,polygon)
time_is_inside_postgis_parallel.append(time()-start_time)
plt.plot(n_points,time_parallelpointinpolygon,label='parallelpointinpolygon')
plt.plot(n_points,time_mpltPath,label='mpltPath')
plt.plot(n_points,time_ray_tracing_numpy_numba,label='ray_tracing_numpy_numba')
plt.plot(n_points,time_is_inside_sm_parallel,label='is_inside_sm_parallel')
plt.plot(n_points,time_is_inside_postgis_parallel,label='is_inside_postgis_parallel')
plt.xlabel("N points")
plt.ylabel("time (sec)")
plt.legend(loc = 'best')
plt.show()
CONCLUSION
The fastest algorithms are:
1- is_inside_sm_parallel
2- is_inside_postgis_parallel
3- parallelpointinpolygon (@epifanio)
There's one difference — which shouldn't matter — that the other answers haven't touched on, so:
There's no difference that's likely to matter, no. Yes, there is a very small difference.
If the fulfillment handler passed to then
throws, the promise returned by that call to then
is rejected with what was thrown.
If it returns a rejected promise, the promise returned by the call to then
is resolved to that promise (and will ultimately be rejected, since the promise it's resolved to is rejected), which may introduce one extra async "tick" (one more loop in the microtask queue, to put it in browser terms).
Any code that relies on that difference is fundamentally broken, though. :-) It shouldn't be that sensitive to the timing of the promise settlement.
Here's an example:
function usingThrow(val) {
return Promise.resolve(val)
.then(v => {
if (v !== 42) {
throw new Error(`${v} is not 42!`);
}
return v;
});
}
function usingReject(val) {
return Promise.resolve(val)
.then(v => {
if (v !== 42) {
return Promise.reject(new Error(`${v} is not 42!`));
}
return v;
});
}
// The rejection handler on this chain may be called **after** the
// rejection handler on the following chain
usingReject(1)
.then(v => console.log(v))
.catch(e => console.error("Error from usingReject:", e.message));
// The rejection handler on this chain may be called **before** the
// rejection handler on the preceding chain
usingThrow(2)
.then(v => console.log(v))
.catch(e => console.error("Error from usingThrow:", e.message));
If you run that, as of this writing you get:
Error from usingThrow: 2 is not 42!
Error from usingReject: 1 is not 42!
Note the order.
Compare that to the same chains but both using usingThrow
:
function usingThrow(val) {
return Promise.resolve(val)
.then(v => {
if (v !== 42) {
throw new Error(`${v} is not 42!`);
}
return v;
});
}
usingThrow(1)
.then(v => console.log(v))
.catch(e => console.error("Error from usingThrow:", e.message));
usingThrow(2)
.then(v => console.log(v))
.catch(e => console.error("Error from usingThrow:", e.message));
which shows that the rejection handlers ran in the other order:
Error from usingThrow: 1 is not 42!
Error from usingThrow: 2 is not 42!
I said "may" above because there's been some work in other areas that removed this unnecessary extra tick in other similar situations if all of the promises involved are native promises (not just thenables). (Specifically: In an async
function, return await x
originally introduced an extra async tick vs. return x
while being otherwise identical; ES2020 changed it so that if x
is a native promise, the extra tick is removed.)
Again, any code that's that sensitive to the timing of the settlement of a promise is already broken. So really it doesn't/shouldn't matter.
In practical terms, as other answers have mentioned:
- throw won't work if you're in a callback to some other function you've used within your fulfillment handler — this is the biggie
- throw abruptly terminates the function, which can be useful (but you're using return in your example, which does the same thing)
- you can't use throw in a conditional expression (? :), at least not for now
Other than that, it's mostly a matter of style/preference, so as with most of those, agree with your team what you'll do (or that you don't care either way), and be consistent.
It's better to use JavaScript for this:
<button href="images/car.jpg" id="myButton">
Here is the Button to be clicked
</button>
<script>
$(document).ready(function(){
document.getElementById("myButton").click();
});
</script>
For what it's worth, the source code for Stream.Close
explains why there are two methods:
// Stream used to require that all cleanup logic went into Close(),
// which was thought up before we invented IDisposable. However, we
// need to follow the IDisposable pattern so that users can write
// sensible subclasses without needing to inspect all their base
// classes, and without worrying about version brittleness, from a
// base class switching to the Dispose pattern. We're moving
// Stream to the Dispose(bool) pattern - that's where all subclasses
// should put their cleanup now.
In short, Close
is only there because it predates Dispose
, and it can't be deleted for compatibility reasons.
Create a key using:
keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000
then sign the APK using:
jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore my_application.apk alias_name
If anyone still cares about how to do this without JavaScript, let me complete Josh's answer:
How to display the text of the filename:
The easiest way is to set both elements to position:relative, give the label a higher z-index, and give the file input a negative margin until the label text is where you want it to be. Do not use display:none on the input!
input[type="file"] {
position:relative;
z-index:1;
margin-left:-90px;
}
.custom-file-upload {
border: 1px solid #ccc;
display: inline-block;
padding: 6px 12px;
cursor: pointer;
position:relative;
z-index:2;
background:white;
}
The @BeforeClass
and @AfterClass
annotated methods will be run exactly once during your test run - at the very beginning and end of the test as a whole, before anything else is run. In fact, they're run before the test class is even constructed, which is why they must be declared static
.
The @Before
and @After
methods will be run before and after every test case, so will probably be run multiple times during a test run.
So let's assume you had three tests in your class, the order of method calls would be:
setUpBeforeClass()
(Test class first instance constructed and the following methods called on it)
setUp()
test1()
tearDown()
(Test class second instance constructed and the following methods called on it)
setUp()
test2()
tearDown()
(Test class third instance constructed and the following methods called on it)
setUp()
test3()
tearDown()
tearDownAfterClass()
Checking your linked site, you may include a script tag passing a ?var=desiredVarName
parameter which will be set as a global variable containing the IP address:
<script type="text/javascript" src="http://l2.io/ip.js?var=myip"></script>
<!-- ^^^^ -->
<script>alert(myip);</script>
I believe I don't have to say that this can be easily spoofed (through either use of proxies or spoofed request headers), but it is worth noting in any case.
In case your page is served using the https
protocol, most browsers will block content in the same page served using the http
protocol (that includes scripts and images), so the options are rather limited. If you have < 5k hits/day, the Smart IP API can be used. For instance:
<script>
var myip;
function ip_callback(o) {
myip = o.host;
}
</script>
<script src="https://smart-ip.net/geoip-json?callback=ip_callback"></script>
<script>alert(myip);</script>
Edit: Apparently, this https
service's certificate has expired so the user would have to add an exception manually. Open its API directly to check the certificate state: https://smart-ip.net/geoip-json
The most resilient and simple way, in case you have back-end server logic, would be to simply output the requester's IP inside a <script>
tag, this way you don't need to rely on external resources. For example:
PHP:
<script>var myip = '<?php echo $_SERVER['REMOTE_ADDR']; ?>';</script>
There's also a more sturdy PHP solution (accounting for headers that are sometimes set by proxies) in this related answer.
C#:
<script>var myip = '<%= Request.UserHostAddress %>';</script>
As for the inverse, the function is inv(A)
, but I wouldn't recommend using it, since for huge matrices it is very computationally costly and numerically unstable. Instead, you should use an approximation to the inverse, or, if you want to solve Ax = b, note that you don't really need A^-1 at all.
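The answer above is about MATLAB, but the same idea carries over elsewhere; as a hedged illustration, here is a small NumPy sketch showing how to solve Ax = b directly instead of forming the inverse:
import numpy as np
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
# preferred: solve the system directly, no explicit inverse needed
x = np.linalg.solve(A, b)
# also works, but forming the inverse is slower and less numerically stable
x_via_inverse = np.linalg.inv(A) @ b
print(np.allclose(x, x_via_inverse))  # True here, but solve() is the safer route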
Whenever you want to redirect, send the headers:
header("Location: http://www.example.com/");
Remember you can't send any output to the client before that, though.
I was having this problem and added this JavaScript to the bottom of my page (read it at https://www.webtrickshome.com/faq/how-to-stop-form-resubmission-on-page-refresh) and it seemed to work. It seems a much simpler solution. Any drawbacks?
<script>
if ( window.history.replaceState ) {
window.history.replaceState( null, null, window.location.href );
}
</script>
Thanks,
doug
A DataFrame
is defined well with a google search for "DataFrame definition":
A data frame is a table, or two-dimensional array-like structure, in which each column contains measurements on one variable, and each row contains one case.
So, a DataFrame
has additional metadata due to its tabular format, which allows Spark to run certain optimizations on the finalized query.
An RDD
, on the other hand, is merely a Resilient Distributed Dataset that is more of a black box of data that cannot be optimized, since the operations that can be performed against it are not as constrained.
However, you can go from a DataFrame to an RDD
via its rdd
method, and you can go from an RDD
to a DataFrame
(if the RDD is in a tabular format) via the toDF
method.
In general it is recommended to use a DataFrame
where possible due to the built in query optimization.
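As a minimal sketch of that round trip (shown here in PySpark; the Scala API exposes the same rdd / toDF pair):
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("df-vs-rdd").getOrCreate()
# a small tabular DataFrame
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
# DataFrame -> RDD via the rdd attribute (rows become Row objects)
rdd = df.rdd
print(rdd.map(lambda row: row.name).collect())  # ['alice', 'bob']
# RDD (in tabular form) -> DataFrame via toDF
df_again = rdd.toDF()
df_again.show()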
You can use a FileOutputStream for this.
FileOutputStream fos = null;
try {
fos = new FileOutputStream(new File("myFile"));
ByteArrayOutputStream baos = new ByteArrayOutputStream();
// Put data in your baos
baos.writeTo(fos);
} catch(IOException ioe) {
// Handle exception here
ioe.printStackTrace();
} finally {
fos.close();
}
If you want to use jQuery, the .load() function is indeed the one you're after, but you are missing the # from the div1 id selector in your example 2).
This should work:
$("#div1").load("file2.html");
Add the following code in functions.php:
add_filter( 'query_vars', 'addnew_query_vars', 10, 1 );
function addnew_query_vars($vars)
{
$vars[] = 'var1'; // var1 is the name of variable you want to add
return $vars;
}
Then you will be able to use $_GET['var1'].
Another simple way:
Dir.mkdir('tmp/excel') unless Dir.exist?('tmp/excel')
This is a good way of counting entries within .pivot_table
:
df2.pivot_table(values='X', index=['Y','Z'], columns='X', aggfunc='count')
X1 X2
Y Z
Y1 Z1 1 1
Z2 1 NaN
Y2 Z3 1 NaN
You can do it with a single printf
:
System.out.printf("First Name: %s\nLast Name: %s",firstname, lastname);
You should use options.addAll(allPoints);
instead of options.add(point);
Since you mentioned that you're working on an NFS system, do you have access to those semaphores and shared memory? I think you misunderstood what they are: they are APIs that enable processes to communicate with each other, and semaphores are a solution for preventing race conditions and for letting threads communicate with each other. In short, they do not leave any residue on any filesystem.
Unless you are using a socket or a pipe? Do you have the necessary permissions to remove them, and why are they on an NFS system?
Hope this helps. Best regards, Tom.
The ODP.NET provider from Oracle uses bind by position by default. To change the behavior to bind by name, set the property BindByName to true. Then you can dispense with the double definition of parameters.
using(OracleCommand cmd = con.CreateCommand()) {
...
cmd.BindByName = true;
...
}
In this example I'll try to show you a complete way to customize your OrderBy sorts.
import java.util.List;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Sort;
import org.springframework.data.jpa.repository.*;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Repository;
import org.springframework.data.domain.Sort;
/**
* Spring Data repository for the User entity.
*/
@SuppressWarnings("unused")
@Repository
public interface UserRepository extends JpaRepository<User, Long> {
List <User> findAllWithCustomOrderBy(Sort sort);
}
You will use this example: a method that dynamically builds an object that is an instance of Sort:
import org.springframework.data.domain.Sort;
public class SampleOrderBySpring{
Sort dynamicOrderBySort = createSort();
public static void main( String[] args )
{
System.out.println("default sort \"firstName\",\"name\",\"age\",\"size\" ");
Sort defaultSort = createStaticSort();
System.out.println(userRepository.findAllWithCustomOrderBy(defaultSort ));
String[] orderBySortedArray = {"name", "firstName"};
System.out.println("default sort ,\"name\",\"firstName\" ");
Sort dynamicSort = createDynamicSort(orderBySortedArray );
System.out.println(userRepository.findAllWithCustomOrderBy(dynamicSort ));
}
public Sort createDynamicSort(String[] arrayOrdre) {
return Sort.by(arrayOrdre);
}
public Sort createStaticSort() {
String[] arrayOrdre = {"firstName","name","age","size"};
return Sort.by(arrayOrdre);
}
}
To get the numbers after the decimal point, you can do this:
read num1 num2
div=`echo $num1 / $num2 | bc -l`
echo $div
You're doing things in the wrong order.
You need to first add all JComponents to the JFrame, and only then call pack()
and then setVisible(true)
on the JFrame.
If you later add JComponents that could change the GUI's size, you will need to call pack()
again, and then repaint()
on the JFrame after doing so.
cd ~ && apt-get source coreutils && ls -d coreutils*
You should be able to use a command like this on Ubuntu to gather the source for a package; you can omit sudo, assuming you're downloading to a location you own.
-- html file --
<select name='city[]'>
<option name='Kabul' value="Kabul" > Kabul </option>
<option name='Herat' value='Herat' selected="selected"> Herat </option>
<option name='Mazar' value='Mazar'>Mazar </option>
</select>
-- php file --
$city = (isset($_POST['city']) ? $_POST['city']: null);
print("city is: ".$city[0]);
You answered the question yourself: "I know that absolute positioned elements are removed from the flow, thus ignored by other elements." So you can't set the parent's height according to an absolutely positioned element.
You either use fixed heights or you need to involve JS.
There's a short overview at MinGW-w64 Wiki:
Why doesn't mingw-w64 gcc support Dwarf-2 Exception Handling?
The Dwarf-2 EH implementation for Windows is not designed at all to work under 64-bit Windows applications. In win32 mode, the exception unwind handler cannot propagate through non-dw2 aware code, this means that any exception going through any non-dw2 aware "foreign frames" code will fail, including Windows system DLLs and DLLs built with Visual Studio. Dwarf-2 unwinding code in gcc inspects the x86 unwinding assembly and is unable to proceed without other dwarf-2 unwind information.
The SetJump LongJump method of exception handling works for most cases on both win32 and win64, except for general protection faults. Structured exception handling support in gcc is being developed to overcome the weaknesses of dw2 and sjlj. On win64, the unwind-information are placed in xdata-section and there is the .pdata (function descriptor table) instead of the stack. For win32, the chain of handlers are on stack and need to be saved/restored by real executed code.
GCC GNU about Exception Handling:
GCC supports two methods for exception handling (EH):
- DWARF-2 (DW2) EH, which requires the use of DWARF-2 (or DWARF-3) debugging information. DW-2 EH can cause executables to be slightly bloated because large call stack unwinding tables have to be included in the executables.
- A method based on setjmp/longjmp (SJLJ). SJLJ-based EH is much slower than DW2 EH (penalising even normal execution when no exceptions are thrown), but can work across code that has not been compiled with GCC or that does not have call-stack unwinding information.
[...]
Structured Exception Handling (SEH)
Windows uses its own exception handling mechanism known as Structured Exception Handling (SEH). [...] Unfortunately, GCC does not support SEH yet. [...]
Use the timeit module. It's very easy. Run your example.py file so it is active in the Python shell; you should now be able to call your function in the shell. Try it out to check that it works:
>>>fun(input)
output
Good, that works, now import timeit and set up a timer
>>>import timeit
>>>t = timeit.Timer('example.fun(input)','import example')
>>>
Now we have our timer set up we can see how long it takes
>>>t.timeit(number=1)
some number here
And there we go, it will tell you how many seconds (or less) it took to execute that function. If it's a simple function then you can increase it to t.timeit(number=1000) (or any number!) and then divide the answer by the number to get the average.
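For example, a minimal sketch of that averaging step (example.fun and the input value are hypothetical names standing in for your own module and argument):
import timeit
# time 1000 calls, then divide by the call count to get the mean time per call
t = timeit.Timer('example.fun(value)', 'import example; value = 42')
total = t.timeit(number=1000)
print("average seconds per call:", total / 1000)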
I hope this helps.
This is the most portable, least intrusive way:
select
count(*)
from
INFORMATION_SCHEMA.VIEWS
where
table_name = 'MyView'
and table_schema = 'MySchema'
Edit: This does work on SQL Server, and it doesn't require you joining to sys.schemas
to get the schema of the view. This is less important if everything is dbo
, but if you're making good use of schemas, then you should keep that in mind.
Each RDBMS has their own little way of checking metadata like this, but information_schema
is actually ANSI, and I think Oracle and apparently SQLite are the only ones that don't support it in some fashion.
Step1: git rebase -i HEAD~n
to do interactive rebase for the last n
commits affected. (i.e. if you want to change a commit message 3 commits back, do git rebase -i HEAD~3
)
git will pop up an editor to handle those commits, notice this command:
# r, reword = use commit, but edit the commit message
that is exactly we need!
Step2: Change pick
to r
for those commits that you want to update the message. Don't bother changing the commit message here, it will be ignored. You'll do that on the next step. Save and close the editor.
Note that if you edit your rebase 'plan' but it doesn't begin the process of letting you reword the commits, run:
git rebase --continue
If you want to change the text editor used for the interactive session (e.g. from the default vi to nano), run:
GIT_EDITOR=nano git rebase -i HEAD~n
Step3: Git will pop up another editor for every revision you put r
before. Update the commit msg as you like, then save and close the editor.
Step4: After all commit messages are updated, you might want to do git push -f
to update the remote.
Thread interruption is based on the interrupt status flag. For every thread, the default value of the interrupt status is false. Whenever the interrupt() method is called on a thread, its interrupt status is set to true.
Your class JSON_result
does not match your JSON string. Note how the object JSON_result
is going to represent is wrapped in another property named "Venue"
.
So either create a class for that, e.g.:
Public Class Container
Public Venue As JSON_result
End Class
Public Class JSON_result
Public ID As Integer
Public Name As String
Public NameWithTown As String
Public NameWithDestination As String
Public ListingType As String
End Class
Dim obj = JsonConvert.DeserializeObject(Of Container)(...your_json...)
or change your JSON string to
{
"ID": 3145,
"Name": "Big Venue, Clapton",
"NameWithTown": "Big Venue, Clapton, London",
"NameWithDestination": "Big Venue, Clapton, London",
"ListingType": "A",
"Address": {
"Address1": "Clapton Raod",
"Address2": "",
"Town": "Clapton",
"County": "Greater London",
"Postcode": "PO1 1ST",
"Country": "United Kingdom",
"Region": "Europe"
},
"ResponseStatus": {
"ErrorCode": "200",
"Message": "OK"
}
}
or use e.g. a ContractResolver
to parse the JSON string.
You could use { display: inline-flex; }, which renders the element as an inline-level flex container.
The easiest way is to use the backticks (``)
feature in Perl. This will execute what is inside and return what was printed to stdout:
my $pid = 5892;
my $var = `top -H -p $pid -n 1 | grep myprocess | wc -l`;
print "not = $var\n";
This should do it.
In the simplest form, the following code works in Angular 6/7
this.http.post("http://destinationurl.com/endpoint", fileFormData)
.subscribe(response => {
//handle response
}, err => {
//handle error
});
Here is the complete implementation
In the C standard, a standalone implementation doesn't have to provide all of the library functions that a hosted implementation has to provide. The C standard doesn't care about embedded, but vendors of embedded systems usually provide standalone implementations with whatever amount of libraries they're willing to provide.
C is a widely used general purpose high level programming language mainly intended for system programming.
Embedded C is an extension to the C programming language that provides support for developing efficient programs for embedded devices. It is not a part of the C language itself.
You can use data-* attributes in HTML5 and do this with the following code:
<script>
$('#mainCat').on('change', function() {
  var selected = $(this).val();
  $("#expertCat option").each(function(item){
    console.log(selected);
    var element = $(this);
    console.log(element.data("tag"));
    if (element.data("tag") != selected){
      element.hide();
    } else {
      element.show();
    }
  });
  $("#expertCat").val($("#expertCat option:visible:first").val());
});
</script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
<select id="mainCat">
  <option value='1'>navid</option>
  <option value='2'>javad</option>
  <option value='3'>mamal</option>
</select>
<select id="expertCat">
  <option value='1' data-tag='2'>UI</option>
  <option value='2' data-tag='2'>Java Android</option>
  <option value='3' data-tag='1'>Web</option>
  <option value='3' data-tag='1'>Server</option>
  <option value='3' data-tag='3'>Back End</option>
  <option value='3' data-tag='3'>.net</option>
</select>
The se argument from the example also isn't in the help or online documentation. When se in geom_smooth is set to FALSE, the error shading region is not displayed.
You can also remove them by adding code to your global.asax file:
protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
{
HttpContext.Current.Response.Headers.Remove("X-Powered-By");
HttpContext.Current.Response.Headers.Remove("X-AspNet-Version");
HttpContext.Current.Response.Headers.Remove("X-AspNetMvc-Version");
HttpContext.Current.Response.Headers.Remove("Server");
}
If you are a user of my ForEach DSL, it can be done with a Detect
query.
Foo foo = ...
Detect<Foo> query = Detect.from(list);
for (Detect<Foo> each: query)
each.yield = each.element.a == foo.a && each.element.b == foo.b;
return query.result();
+universal
One thing to note: in order for that to make a difference you need to have built python with +universal. If you haven't, or you're not sure, you can just rebuild python with +universal. This applies to both brew and MacPorts.
$ brew reinstall python
$ brew install boost
OR
$ sudo port -f uninstall python
$ sudo port install python +universal
$ sudo port install boost +universal
Yes, you'll need to use pseudo elements AND pseudo selectors: http://jsfiddle.net/cYky9/
Yes, I use gocr at http://jocr.sourceforge.net/; it's a command-line application which you could execute from your application. I use it in a couple of my applications.
Use your subquery to return the correct grouping, because you're halfway there.
Try this:
select
a.*
from
messages a
inner join
(select name, max(id) as maxid from messages group by name) as b on
a.id = b.maxid
If it's not id
you want the max of:
select
a.*
from
messages a
inner join
(select name, max(other_col) as other_col
from messages group by name) as b on
a.name = b.name
and a.other_col = b.other_col
This way, you avoid correlated subqueries and/or ordering in your subqueries, which tend to be very slow/inefficient.
I know that I'm not directly answering the question, but I think it's a good approach. Nobody has mentioned tmux (or at least not as a standalone answer). Tmux is a terminal multiplexer like screen. Most things can be done in both multiplexers, but AFAIK tmux is easier to configure. Also, tmux is currently being developed more actively than screen, and there's quite a big ecosystem around it, like tools that help with configuration, etc.
Also for vim, there's another plugin, Vimux, that helps a lot with the interaction between both tools. You can call commands with:
:call VimuxRunCommand("ls")
That command creates a small horizontal split below the current pane vim is in.
It can also let you run from a prompt in case you don't want to run the whole command:
<Leader>vp :VimuxPromptCommand<CR>
As if that weren't enough, there are at least 6 platform-specific plugins:
g:vroom_use_vimux
Here is a nice "use case": Tests on demand using Vimux and Turbux with Spork and Guard
If you ever need to do this dynamically, you can use the following:
boolean isInstance(Object object, Class<?> type) {
return type.isInstance(object);
}
You can get an instance of java.lang.Class
by calling the instance method Object::getClass
on any object (returns the Class
which that object is an instance of), or you can do class literals (for example, String.class
, List.class
, int[].class
). There are other ways as well, through the reflection API (which Class
itself is the entry point for).
I know this is a rather old thread, but for future readers here is another solution I found quite intuitive:
public class NumberTextField extends TextField
{
@Override
public void replaceText(int start, int end, String text)
{
if (validate(text))
{
super.replaceText(start, end, text);
}
}
@Override
public void replaceSelection(String text)
{
if (validate(text))
{
super.replaceSelection(text);
}
}
private boolean validate(String text)
{
return text.matches("[0-9]*");
}
}
Edit: Thanks none_ and SCBoy for your suggested improvements.
LINUX
struct timeval tv;
tv.tv_sec = 30; // 30 Secs Timeout
tv.tv_usec = 0; // Not init'ing this can cause strange errors
setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, (const char*)&tv,sizeof(struct timeval));
WINDOWS
DWORD timeout = SOCKET_READ_TIMEOUT_SEC * 1000;
setsockopt(socket, SOL_SOCKET, SO_RCVTIMEO, (const char*)&timeout, sizeof(timeout));
NOTE: You have to put this setting before the bind() call for it to work properly.
I am using org.springframework.core.ResolvableType for a ListResultEntity :
ResolvableType resolvableType = ResolvableType.forClassWithGenerics(ListResultEntity.class, itemClass);
ParameterizedTypeReference<ListResultEntity<T>> typeRef = ParameterizedTypeReference.forType(resolvableType.getType());
So in your case:
public <T> ResponseWrapper<T> makeRequest(URI uri, Class<T> clazz) {
ResponseEntity<ResponseWrapper<T>> response = template.exchange(
uri,
HttpMethod.POST,
null,
ParameterizedTypeReference.forType(ResolvableType.forClassWithGenerics(ResponseWrapper.class, clazz)));
return response;
}
This only makes use of Spring and of course requires some knowledge about the returned types (but should even work for nested generic wrappers as long as you provide the classes as varargs).
"y" is a string/array/pointer. 'y' is a char/integral type
Unfortunately, the MinGW-w64 installer you used sometimes has this issue. I myself am not sure about why this happens (I think it has something to do with Sourceforge URL redirection or whatever that the installer currently can't handle properly enough).
Anyways, if you're already planning on using MSYS2, there's no need for that installer.
Download MSYS2 from this page (choose 32 or 64-bit according to what version of Windows you are going to use it on, not what kind of executables you want to build, both versions can build both 32 and 64-bit binaries).
After the install completes, click on the newly created "MSYS2 Shell" option under either MSYS2 64-bit
or MSYS2 32-bit
in the Start menu. Update MSYS2 according to the wiki (although I just do a pacman -Syu
, ignore all errors and close the window and open a new one, this is not recommended and you should do what the wiki page says).
Install a toolchain
a) for 32-bit:
pacman -S mingw-w64-i686-gcc
b) for 64-bit:
pacman -S mingw-w64-x86_64-gcc
Install any libraries/tools you may need. You can search the repositories by doing
pacman -Ss name_of_something_i_want_to_install
e.g.
pacman -Ss gsl
and install using
pacman -S package_name_of_something_i_want_to_install
e.g.
pacman -S mingw-w64-x86_64-gsl
and from then on the GSL library is automatically found by your MinGW-w64 64-bit compiler!
Open a MinGW-w64 shell:
a) To build 32-bit things, open the "MinGW-w64 32-bit Shell"
b) To build 64-bit things, open the "MinGW-w64 64-bit Shell"
Verify that the compiler is working by doing
gcc -v
If you want to use the toolchains (with installed libraries) outside of the MSYS2 environment, all you need to do is add <MSYS2 root>/mingw32/bin
or <MSYS2 root>/mingw64/bin
to your PATH
.
Update for Debian/Ubuntu
Google Mock (package: google-mock
) and Google Test (package: libgtest-dev
) have been merged. The new package is called googletest
. Both old names are still available for backwards compatibility and now depend on the new package googletest
.
So, to get your libraries from the package repository, you can do the following:
sudo apt-get install googletest -y
cd /usr/src/googletest
sudo mkdir build
cd build
sudo cmake ..
sudo make
sudo cp googlemock/*.a googlemock/gtest/*.a /usr/lib
After that, you can link against -lgmock
(or against -lgmock_main
if you do not use a custom main method) and -lpthread
. This was sufficient for using Google Test in my cases at least.
If you want the most current version of Google Test, download it from github. After that, the steps are similar:
git clone https://github.com/google/googletest
cd googletest
sudo mkdir build
cd build
sudo cmake ..
sudo make
sudo cp lib/*.a /usr/lib
As you can see, the path where the libraries are created has changed. Keep in mind that the new path might be valid for the package repositories soon, too.
Instead of copying the libraries manually, you could use sudo make install
. It "currently" works, but be aware that it did not always work in the past. Also, you don't have control over the target location when using this command and you might not want to pollute /usr/lib
.
No, there isn't, but you could whip one up:
jQuery.fn.swapWith = function(to) {
return this.each(function() {
var copy_to = $(to).clone(true);
var copy_from = $(this).clone(true);
$(to).replaceWith(copy_from);
$(this).replaceWith(copy_to);
});
};
Usage:
$(selector1).swapWith(selector2);
Note this only works if the selectors only match 1 element each, otherwise it could give weird results.
plt.imshow
displays the image on the axes, but if you need to display multiple images you use show()
to finish the figure. The next example shows two figures:
import numpy as np
from keras.datasets import mnist
(X_train,y_train),(X_test,y_test) = mnist.load_data()
from matplotlib import pyplot as plt
plt.imshow(X_train[0])
plt.show()
plt.imshow(X_train[1])
plt.show()
In Google Colab, if you comment out the show()
method from previous example just a single image will display (the later one connected with X_train[1]
).
Here is the content from the help:
plt.show(*args, **kw)
Display a figure.
When running in ipython with its pylab mode, display all
figures and return to the ipython prompt.
In non-interactive mode, display all figures and block until
the figures have been closed; in interactive mode it has no
effect unless figures were created prior to a change from
non-interactive to interactive mode (not recommended). In
that case it displays the figures but does not block.
A single experimental keyword argument, *block*, may be
set to True or False to override the blocking behavior
described above.
plt.imshow(X, cmap=None, norm=None, aspect=None, interpolation=None, alpha=None, vmin=None, vmax=None, origin=None, extent=None, shape=None, filternorm=1, filterrad=4.0, imlim=None, resample=None, url=None, hold=None, data=None, **kwargs)
Display an image on the axes.
Parameters
----------
X : array_like, shape (n, m) or (n, m, 3) or (n, m, 4)
Display the image in `X` to current axes. `X` may be an
array or a PIL image. If `X` is an array, it
can have the following shapes and types:
- MxN -- values to be mapped (float or int)
- MxNx3 -- RGB (float or uint8)
- MxNx4 -- RGBA (float or uint8)
The value for each component of MxNx3 and MxNx4 float arrays
should be in the range 0.0 to 1.0. MxN arrays are mapped
to colors based on the `norm` (mapping scalar to scalar)
and the `cmap` (mapping the normed scalar to a color).
As you use Joda Time, you should use DateTimeFormatter
:
final DateTimeFormatter dtf = DateTimeFormat.forPattern("yyyy-MMM-dd");
final LocalDate dt = dtf.parseLocalDate(yourinput);
If using Java 8 or later, then refer to hertzi's answer
You can use display: flex
and height: 100vh
html, body {
  height: 100%;
  margin: 0px;
}
body {
  display: flex;
}
.left, .right {
  flex: 1;
}
.left {
  background: orange;
}
.right {
  background: cyan;
}
<div class="left">left</div>
<div class="right">right</div>
var result = _context.Employees
.Where(x => !x.IsDeleted)
.Where(x => x.ClientId > (clientId > 0 ? clientId - 1 : -1))
.Where(x => x.ClientId < (clientId > 0 ? clientId + 1 : 1000))
.Where(x => x.ContractorFlag == employeeFlag);
return result;
If clientId = 0 we want ALL employees, but for any clientId between 1 and 999 we want only clients with that ID. I was having issues with separate LINQ statements not being the same (the Deleted/Clients filters need to be on all queries), so by adding these two lines it works (albeit only until we have 999+ clients, which would be a happy refactoring day!).
You're using the ajax function incorrectly. Since it's synchronous it'll return the data inline like so:
var remote = $.ajax({
type: "GET",
url: remote_url,
async: false
}).responseText;
Inside the filter, inject this JavaScript, which will bring up the login page. If you don't do this, then in your AJAX call you will get the login page and its contents will be appended.
Inside your filter or redirect insert this script in response:
String scr = "<script>window.location=\""+request.getContextPath()+"/login.do\"</script>";
response.getWriter().write(scr);
while true ; do
...
if [ something ]; then
break
fi
done
python_test.py
import subprocess
a = subprocess.check_output("batch_1.bat")
print a
This prints the output from the batch file on the Python IDLE/running console, so in the batch file you can echo the result at each step to debug the issue. This is also useful in automation when an error happens in the batch call, to understand and locate the error easily. (Put "echo off" at the beginning of the batch file to avoid printing everything.)
batch_1.bat
echo off
echo "Hello World"
md newdir
echo "made new directory"
A trust store generally contains the certificates of the root CAs (public or private CAs); strictly speaking it should only contain root CAs, but this rule is commonly violated. You can verify the list of certs in a trust store using
keytool -list -v -keystore truststore.jks
I had the same issue with Python 3. My code was writing into io.BytesIO(). Replacing it with io.StringIO() solved the problem.
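A minimal sketch of the difference, assuming (as in my case) the data is being written with the csv module, which expects text in Python 3:
import csv, io
rows = [["name", "age"], ["alice", "30"]]
# csv.writer produces str, so it needs a text buffer in Python 3
text_buf = io.StringIO()
csv.writer(text_buf).writerows(rows)
print(text_buf.getvalue())
# writing the same str data into io.BytesIO() raises
# TypeError: a bytes-like object is required, not 'str'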
If the types of the parameters are all the same (varchar2
for example), you can have a package like this which will do the following:
CREATE OR REPLACE PACKAGE testuser.test_pkg IS
TYPE assoc_array_varchar2_t IS TABLE OF VARCHAR2(4000) INDEX BY BINARY_INTEGER;
PROCEDURE your_proc(p_parm IN assoc_array_varchar2_t);
END test_pkg;
CREATE OR REPLACE PACKAGE BODY testuser.test_pkg IS
PROCEDURE your_proc(p_parm IN assoc_array_varchar2_t) AS
BEGIN
FOR i IN p_parm.first .. p_parm.last
LOOP
dbms_output.put_line(p_parm(i));
END LOOP;
END;
END test_pkg;
Then, to call it you'd need to set up the array and pass it:
DECLARE
l_array testuser.test_pkg.assoc_array_varchar2_t;
BEGIN
l_array(0) := 'hello';
l_array(1) := 'there';
testuser.test_pkg.your_proc(l_array);
END;
/
This is a summary of the main solutions with easy-to-reproduce code:
var data = {foo: 'blah "!"', bar: 123};
var xhr = new XMLHttpRequest();
xhr.open("POST", "test.php");
xhr.onreadystatechange = function() { if (xhr.readyState === 4 && xhr.status === 200) { console.log(xhr.responseText); } }
xhr.setRequestHeader("Content-type", "application/json") // or "text/plain"
xhr.send(JSON.stringify(data));
PHP side, you can get the data with:
print_r(json_decode(file_get_contents('php://input'), true));
var data = {foo: 'blah "!"', bar: 123};
var xhr = new XMLHttpRequest();
xhr.open("POST", "test.php");
xhr.onreadystatechange = function() { if (xhr.readyState === 4 && xhr.status === 200) { console.log(xhr.responseText); } }
xhr.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
xhr.send("json=" + encodeURIComponent(JSON.stringify(data)));
Note: encodeURIComponent(...)
is needed for example if the JSON contains &
character.
PHP side, you can get the data with:
print_r(json_decode($_POST['json'], true));
var data = {foo: 'blah "!"', bar: 123};
var xhr = new XMLHttpRequest();
xhr.open("POST", "test.php");
xhr.onreadystatechange = function() { if (xhr.readyState === 4 && xhr.status === 200) { console.log(xhr.responseText); } }
xhr.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
xhr.send(new URLSearchParams(data).toString());
PHP side, you can get the data with:
print_r($_POST);
Just use the event-handler. Something like this should work:
$('.js-my-element').bind('contextmenu', function(e) {
e.preventDefault();
    alert("The event handler will make sure that the context menu doesn't appear.");
});
Applying inline-block
to the element that is to be centered and applying text-align:center
to the parent block did the trick for me.
Works even on <span>
tags.
Most of the time this compile error occurs when compiling a Unicode (UTF-8 encoded) file:
javac -encoding UTF-8 HelloWorld.java
You can also add this compile option to your IDE, e.g. IntelliJ IDEA
(File > Settings > Java Compiler), as an additional command line parameter:
-encoding: Set the source file encoding name, such as EUC-JP or UTF-8. If -encoding is not specified, the platform default converter is used. (DOC)
This works as well:
<dependency>
<scope>compile</scope>
<groupId>javax.servlet.jsp.jstl</groupId>
<artifactId>jstl-api</artifactId>
<version>1.2</version>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>jstl</artifactId>
<version>1.2</version>
</dependency>
So use jstl 1.2 together with jstl-api 1.2 instead of standard.jar.
How does spring know which polymorphic type to use.
As long as there is only a single implementation of the interface and that implementation is annotated with @Component
with Spring's component scan enabled, Spring framework can find out the (interface, implementation) pair. If component scan is not enabled, then you have to define the bean explicitly in your application-config.xml (or equivalent spring configuration file).
Do I need @Qualifier or @Resource?
Once you have more than one implementation, then you need to qualify each of them and during auto-wiring, you would need to use the @Qualifier
annotation to inject the right implementation, along with @Autowired
annotation. If you are using @Resource (J2EE semantics), then you should specify the bean name using the name
attribute of this annotation.
Why do we autowire the interface and not the implemented class?
Firstly, it is always a good practice to code to interfaces in general. Secondly, in case of spring, you can inject any implementation at runtime. A typical use case is to inject mock implementation during testing stage.
interface IA
{
public void someFunction();
}
class B implements IA
{
public void someFunction()
{
//busy code block
}
public void someBfunc()
{
//doing b things
}
}
class C implements IA
{
public void someFunction()
{
//busy code block
}
public void someCfunc()
{
//doing C things
}
}
class MyRunner
{
    @Autowired
@Qualifier("b")
IA worker;
....
worker.someFunction();
}
Your bean configuration should look like this:
<bean id="b" class="B" />
<bean id="c" class="C" />
<bean id="runner" class="MyRunner" />
Alternatively, if you enabled component scan on the package where these are present, then you should qualify each class with @Component
as follows:
interface IA
{
public void someFunction();
}
@Component(value="b")
class B implements IA
{
public void someFunction()
{
//busy code block
}
public void someBfunc()
{
//doing b things
}
}
@Component(value="c")
class C implements IA
{
public void someFunction()
{
//busy code block
}
public void someCfunc()
{
//doing C things
}
}
@Component
class MyRunner
{
    @Autowired
@Qualifier("b")
IA worker;
....
worker.someFunction();
}
Then worker
in MyRunner
will be injected with an instance of type B
.
Try to see if SQL snap-ins are present:
get-pssnapin -Registered
Name : SqlServerCmdletSnapin100
PSVersion : 2.0
Description : This is a PowerShell snap-in that includes various SQL Server cmdlets.
Name : SqlServerProviderSnapin100
PSVersion : 2.0
Description : SQL Server Provider
If so
Add-PSSnapin SqlServerCmdletSnapin100 # here lives Invoke-SqlCmd
Add-PSSnapin SqlServerProviderSnapin100
then you can do something like this:
invoke-sqlcmd -inputfile "c:\mysqlfile.sql" -serverinstance "servername\serverinstance" -database "mydatabase" # the parameter -database can be omitted based on what your sql script does.
Based on @AmNaN suggestion (more details):
class ApplicationController < ActionController::Base
def current_controller?(names)
names.include?(params[:controller]) unless params[:controller].blank? || false
end
helper_method :current_controller?
end
Now you can call it e.g. in a navigation layout for marking list items as active:
<ul class="nav nav-tabs">
<li role="presentation" class="<%= current_controller?('items') ? 'active' : '' %>">
<%= link_to user_items_path(current_user) do %>
<i class="fa fa-cloud-upload"></i>
<% end %>
</li>
<li role="presentation" class="<%= current_controller?('users') ? 'active' : '' %>">
<%= link_to users_path do %>
<i class="fa fa-newspaper-o"></i>
<% end %>
</li>
<li role="presentation" class="<%= current_controller?('alerts') ? 'active' : '' %>">
<%= link_to alerts_path do %>
<i class="fa fa-bell-o"></i>
<% end %>
</li>
</ul>
For the users
and alerts
routes, current_page?
would be enough:
current_page?(users_path)
current_page?(alerts_path)
But with nested routes and request for all actions of a controller (comparable with items
), current_controller?
was the better method for me:
resources :users do
resources :items
end
The first menu entry is that way active for the following routes:
/users/x/items #index
/users/x/items/x #show
/users/x/items/new #new
/users/x/items/x/edit #edit
Try the below:
select if(prereq IS NULL ," ",prereq ) from test
I'd print out the result of an2.getNodeName()
as well for debugging purposes. My guess is that your tree crawling code isn't crawling to the nodes that you think it is. That suspicion is enhanced by the lack of checking for node names in your code.
Other than that, the javadoc for Node defines "getNodeValue()" to return null for Nodes of type Element. Therefore, you really should be using getTextContent(). I'm not sure why that wouldn't give you the text that you want.
Perhaps iterate the children of your tag node and see what types are there?
Tried this code and it works for me:
String xml = "<add job=\"351\">\n" +
" <tag>foobar</tag>\n" +
" <tag>foobar2</tag>\n" +
"</add>";
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
ByteArrayInputStream bis = new ByteArrayInputStream(xml.getBytes());
Document doc = db.parse(bis);
Node n = doc.getFirstChild();
NodeList nl = n.getChildNodes();
Node an,an2;
for (int i=0; i < nl.getLength(); i++) {
an = nl.item(i);
if(an.getNodeType()==Node.ELEMENT_NODE) {
NodeList nl2 = an.getChildNodes();
for(int i2=0; i2<nl2.getLength(); i2++) {
an2 = nl2.item(i2);
// DEBUG PRINTS
System.out.println(an2.getNodeName() + ": type (" + an2.getNodeType() + "):");
if(an2.hasChildNodes()) System.out.println(an2.getFirstChild().getTextContent());
if(an2.hasChildNodes()) System.out.println(an2.getFirstChild().getNodeValue());
System.out.println(an2.getTextContent());
System.out.println(an2.getNodeValue());
}
}
}
Output was:
#text: type (3):
foobar
foobar
#text: type (3):
foobar2
foobar2
I do it like this; even though it looks like a hack, it works every time:
ls_of_things = ['apple', 'car', 'truck', 'bike', 'banana']
first = 0
last = len(ls_of_things)
for items in ls_of_things:
    if first == 0:
        first = first + 1
        pass
    elif first == last - 1:
        break
    else:
        do_stuff(items)  # do_stuff is a placeholder for the real work
        first = first + 1
        pass
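A more idiomatic sketch of the same idea using enumerate (do_stuff remains a hypothetical placeholder, as in the original):
ls_of_things = ['apple', 'car', 'truck', 'bike', 'banana']
def do_stuff(item):
    print("processing", item)  # placeholder for the real work
for index, item in enumerate(ls_of_things):
    if index == 0 or index == len(ls_of_things) - 1:
        continue  # skip the first and last elements
    do_stuff(item)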
As I needed to check if there are items from a list in a (long) string, I ended up with this one:
listOfStrings.Any(x => myString.ToUpper().Contains(x.ToUpper()));
Or in vb.net:
listOfStrings.Any(Function(x) myString.ToUpper().Contains(x.ToUpper()))
You can create a new List and add the elements to it:
For example:
List<A> a = getListOfA();
List<Object> newList = new ArrayList<>();
newList.addAll(a);
Shouldn't you have:
DELETE FROM tableA WHERE entitynum IN (...your select...)
Now you just have a WHERE with no comparison:
DELETE FROM tableA WHERE (...your select...)
So your final query would look like this;
DELETE FROM tableA WHERE entitynum IN (
SELECT q.entitynum FROM tableA q
INNER JOIN tableB u on (u.qlabel = q.entityrole AND u.fieldnum = q.fieldnum)
WHERE (LENGTH(q.memotext) NOT IN (8,9,10) OR q.memotext NOT LIKE '%/%/%')
AND (u.FldFormat = 'Date')
)
There are probably some commands to resolve it, but I would start by looking in your .git/config
file for references to that branch, and removing them.
new String(myobj)
If you want to serialize the whole object to string, use JSON.
As mentioned in Django docs, when get
method finds no entry or finds multiple entries, it raises an exception; this is the expected behavior:
get() raises MultipleObjectsReturned if more than one object was found. The MultipleObjectsReturned exception is an attribute of the model class.
get() raises a DoesNotExist exception if an object wasn’t found for the given parameters. This exception is an attribute of the model class.
Using exceptions is a way to handle this problem, but I actually don't like the ugly try-except
block. An alternative solution, and cleaner to me, is to use the combination of filter
+ first
.
user = UniversityDetails.objects.filter(email=email).first()
When you do .first()
to an empty queryset it returns None
. This way you can have the same effect in a single line.
The only difference between catching the exception and using this method occurs when you have multiple entries, the former will raise an exception while the latter will set the first element, but as you are using get
I may assume we won't fall on this situation.
Note that first
method was added on Django 1.6.
I'm not sure about the syntax of your specific commands (e.g., vagrant, etc), but in general...
Just register Ansible's (not-normally-shown) JSON output to a variable, then display each variable's stdout_lines
attribute:
- name: Generate SSH keys for vagrant user
user: name=vagrant generate_ssh_key=yes ssh_key_bits=2048
register: vagrant
- debug: var=vagrant.stdout_lines
- name: Show SSH public key
command: /bin/cat $home_directory/.ssh/id_rsa.pub
register: cat
- debug: var=cat.stdout_lines
- name: Wait for user to copy SSH public key
pause: prompt="Please add the SSH public key above to your GitHub account"
register: pause
- debug: var=pause.stdout_lines
The approach you're looking for is FillDown
. Another way so you don't have to kick your head off every time is to store formulas in an array of strings. Combining them gives you a powerful method of inputting formulas by the multitude. Code follows:
Sub FillDown()
Dim strFormulas(1 To 3) As Variant
With ThisWorkbook.Sheets("Sheet1")
strFormulas(1) = "=SUM(A2:B2)"
strFormulas(2) = "=PRODUCT(A2:B2)"
strFormulas(3) = "=A2/B2"
.Range("C2:E2").Formula = strFormulas
.Range("C2:E11").FillDown
End With
End Sub
Screenshots:
Result as of line: .Range("C2:E2").Formula = strFormulas
:
Result as of line: .Range("C2:E11").FillDown
:
Of course, you can make it dynamic by storing the last row into a variable and turning it to something like .Range("C2:E" & LRow).FillDown
, much like what you did.
Hope this helps!
$('select').on('change', function (e) {
var optionSelected = $("option:selected", this);
var valueSelected = this.value;
....
});
Then re-start the dfs and the yarn as follows.
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
Hope this works fine.
Here is an example to convert high res image into thumbnail size-
protected void Button1_Click(object sender, EventArgs e)
{
//---------- Getting the Image File
System.Drawing.Image img = System.Drawing.Image.FromFile(Server.MapPath("~/profile/Avatar.jpg"));
//---------- Getting Size of Original Image
double imgHeight = img.Size.Height;
double imgWidth = img.Size.Width;
//---------- Getting Decreased Size
double x = imgWidth / 200;
int newWidth = Convert.ToInt32(imgWidth / x);
int newHeight = Convert.ToInt32(imgHeight / x);
//---------- Creating Small Image
System.Drawing.Image.GetThumbnailImageAbort myCallback = new System.Drawing.Image.GetThumbnailImageAbort(ThumbnailCallback);
System.Drawing.Image myThumbnail = img.GetThumbnailImage(newWidth, newHeight, myCallback, IntPtr.Zero);
//---------- Saving Image
myThumbnail.Save(Server.MapPath("~/profile/NewImage.jpg"));
}
public bool ThumbnailCallback()
{
return false;
}
Source- http://iknowledgeboy.blogspot.in/2014/03/c-creating-thumbnail-of-large-image-by.html
oldDate
is not in the format of the SimpleDateFormat
you are using to parse it.
Try this format: dd-MMM-yyyy
- It matches what you're trying to parse.
The runtime or execution environment is the part of a language implementation which executes code and is present at run-time; the compile-time part of the implementation is called the translation environment in the C standard.
Examples:
the Java runtime consists of the virtual machine and the standard library
a common C runtime consists of the loader (which is part of the operating system) and the runtime library, which implements the parts of the C language which are not built into the executable by the compiler; in hosted environments, this includes most parts of the standard library
Specify the main class location in the POM under plugins:
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>2.4</version>
<configuration>
<archive>
<index>true</index>
<manifest>
<mainClass>com.example.hadoop.wordCount.WordCountApp</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
</plugins>
</build>
How are you generating your data?
See how the output shows that your data is of 'object' type? The groupby operations specifically check whether each column is a numeric dtype first.
In [31]: data
Out[31]:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 2557 entries, 2004-01-01 00:00:00 to 2010-12-31 00:00:00
Freq: <1 DateOffset>
Columns: 360 entries, -89.75 to 89.75
dtypes: object(360)
See?
Did you initialize an empty DataFrame first and then fill it? If so, that's probably why it changed with the new version: before 0.9, empty DataFrames were initialized to float type, but now they are of object type. If so, you can change the initialization to DataFrame(dtype=float)
.
You can also call frame.astype(float)
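A minimal sketch of both fixes (the index and column labels here are just made up to mirror the output above):
import pandas as pd
# option 1: initialize the empty frame with a float dtype up front
df = pd.DataFrame(index=pd.date_range("2004-01-01", periods=3), columns=[-89.75, -89.25], dtype=float)
# option 2: cast an object-typed frame to float before grouping
df_obj = df.astype(object)
df_num = df_obj.astype(float)
print(df_num.dtypes)  # float64 columns, so groupby aggregations will work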
There are many ways to compare files from two different branches:
Option 1: If you want to compare the file from one specific branch to another specific branch:
git diff branch1name branch2name path/to/file
Example:
git diff mybranch/myfile.cs mysecondbranch/myfile.cs
In this example you are comparing the file in “mybranch” branch to the file in the “mysecondbranch” branch.
Option 2: Simple way:
git diff branch1:file branch2:file
Example:
git diff mybranch:myfile.cs mysecondbranch:myfile.cs
This example is similar to the option 1.
Option 3: If you want to compare your current working directory to some branch:
git diff ..someBranch path/to/file
Example:
git diff ..master myfile.cs
In this example you are comparing the file from your actual branch to the file in the master branch.
If you only want to upgrade one specific package called somepackage
, the command you should use in recent versions of pip is
pip install --upgrade --upgrade-strategy only-if-needed somepackage
This is very useful when you develop an application in Django that currently will only work with a specific version of Django (say Django=1.9.x) and want to upgrade some dependent package with a bug-fix/new feature and the upgraded package depends on Django (but it works with, say, any version of Django after 1.5).
The default behavior of pip install --upgrade django-some-package
would be to upgrade Django to the latest version available which could otherwise break your application, though with the --upgrade-strategy only-if-needed
dependent packages will now only be upgraded as necessary.
Let's keep it simple!
function copy_properties($from, $to, $fields = null) {
// copies properties/elements (overwrites duplicates)
// can take arrays or objects
// if fields is set (an array), will only copy keys listed in that array
// returns $to with the added/replaced properties/keys
$from_array = is_array($from) ? $from : get_object_vars($from);
foreach($from_array as $key => $val) {
if(!is_array($fields) or in_array($key, $fields)) {
if(is_object($to)) {
$to->$key = $val;
} else {
$to[$key] = $val;
}
}
}
return($to);
}
If that doesn't answer your question, it will surely help towards the answer. Credit for the code above goes to myself :)
<div class="foo">Foo Bar</div>
and in your CSS file:
.foo {
background-image: url("images/foo.png");
}
A functional approach:
>>> import operator
>>> a = [1,3,5,7,11,13,17,21]
>>> map(operator.sub, a[1:], a[:-1])
[2, 2, 2, 4, 2, 4, 4]
Using generator:
>>> import operator, itertools
>>> g1,g2 = itertools.tee((x*x for x in xrange(5)),2)
>>> list(itertools.imap(operator.sub, itertools.islice(g1,1,None), g2))
[1, 3, 5, 7]
Using indices:
>>> [a[i+1]-a[i] for i in xrange(len(a)-1)]
[2, 2, 2, 4, 2, 4, 4]
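As a side note not in the original answer: on Python 3, xrange and itertools.imap are gone and map already returns a lazy iterator, so the equivalents would be:
>>> import operator
>>> a = [1, 3, 5, 7, 11, 13, 17, 21]
>>> list(map(operator.sub, a[1:], a[:-1]))
[2, 2, 2, 4, 2, 4, 4]
>>> [a[i + 1] - a[i] for i in range(len(a) - 1)]
[2, 2, 2, 4, 2, 4, 4]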
Value cannot be null. Parameter name: source
Above error comes in situation when you are querying the collection which is null.
For demonstration below code will result in such an exception.
Console.WriteLine("Hello World");
IEnumerable<int> list = null;
list.Where(d => d ==4).FirstOrDefault();
Here is the output of the above code.
Hello World
Run-time exception (line 11): Value cannot be null. Parameter name: source
Stack Trace:
[System.ArgumentNullException: Value cannot be null. Parameter name: source] at Program.Main(): line 11
In your case ListMetadataKor
is null.
Here is the fiddle if you want to play around.
In case you happen to use additional stuff like dependsOnMethods
, you may want to define the entire @Test flow in your testng.xml file. AFAIK, the order defined in your suite XML file (testng.xml) will override all other ordering strategies.
Note: the function form.reset()
will not work if some input tag in the form has the attribute name='reset'
Using print_r
, var_dump
or var_export
should do it pretty nicely if you look at the result in view-source mode not in HTML mode or as @Joel Larson said if you wrap everything in a <pre>
tag.
print_r
is best for readability but it doesn't print null/false values.
var_dump
is best for checking types of values and lengths and null/false values.
var_export
is similar to var_dump
but it can be used to get the dumped string.
The format returned by any of these is indented correctly in the source code and var_export
can be used for logging since it can be used to return the dumped string.
Use the Xdebug extension for PHP: it prints var_dump
output as HTML-formatted strings rather than the raw dump format, and also lets you supply a custom function to use for formatting.
6.5.5 Multiplicative operators
...
6 When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded.105) If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a; otherwise, the behavior of both a/b and a%b is undefined.
105) This is often called ‘‘truncation toward zero’’.
Dividing an integer by an integer gives an integer result. 1/2 yields 0; assigning this result to a floating-point variable gives 0.0. To get a floating-point result, at least one of the operands must be a floating-point type. b = a / 350.0f;
should give you the result you want.
$("#registerSubmit").serialize() // returns all the data in your form
$.ajax({
type: "POST",
url: 'your url',
data: $("#registerSubmit").serialize(),
success: function() {
//success message mybe...
}
});
Add the useUnifiedTopology option and set it to true.
Set the other three mongoose.connect options as well, which deal with the remaining deprecation warnings.
This configuration works for me!
const url = 'mongodb://localhost:27017/db_name';
mongoose.connect(
url,
{
useNewUrlParser: true,
useUnifiedTopology: true,
useCreateIndex: true,
useFindAndModify: false
}
)
This will solve 4 DeprecationWarning.
findOneAndUpdate()
and findOneAndDelete()
without the useFindAndModify
option set to false are deprecated. See: https://mongoosejs.com/docs/deprecations.html#-findandmodify-. Hope it helps.
Quickly ran a timeit to see if there are any differences between the np.full
and np.ones
version.
Answer: No
import timeit
n_array, n_test = 1000, 10000
setup = f"import numpy as np; n = {n_array};"
print(f"np.ones: {timeit.timeit('np.ones((n, n), dtype=bool)', number=n_test, setup=setup)}s")
print(f"np.full: {timeit.timeit('np.full((n, n), True)', number=n_test, setup=setup)}s")
Result:
np.ones: 0.38416870904620737s
np.full: 0.38430388597771525s
IMPORTANT
Regarding the post about np.empty
(and I cannot comment, as my reputation is too low):
DON'T DO THAT. DON'T USE np.empty
to initialize an all-True
array.
As the array is uninitialized, the memory is not written, and there is no guarantee what your values will be, e.g.
>>> print(np.empty((4,4), dtype=bool))
[[ True True True True]
[ True True True True]
[ True True True True]
[ True True False False]]
Check out Intent flags like no history, clear back stack, etc. via Intent.setFlags
Intent mStartActivity = new Intent(HomeActivity.this, SplashScreen.class);
int mPendingIntentId = 123456;
PendingIntent mPendingIntent = PendingIntent.getActivity(HomeActivity.this, mPendingIntentId, mStartActivity,
PendingIntent.FLAG_CANCEL_CURRENT);
AlarmManager mgr = (AlarmManager) HomeActivity.this.getSystemService(Context.ALARM_SERVICE);
mgr.set(AlarmManager.RTC, System.currentTimeMillis() + 100, mPendingIntent);
System.exit(0);
⌘+U Undo last cursor operation
You can also try ctrl+-
BTW, all the shortcuts are here: https://code.visualstudio.com/shortcuts/keyboard-shortcuts-macos.pdf (really useful!)
No need to use a macro. Supposing your first string is in A1.
=RIGHT(A1, 4)
Drag this down and you will get your four last characters.
Edit: To be sure, if you ever have strings like 'ABC DEF ' (with trailing spaces) and want the last four LETTERS and not CHARACTERS you might want to use TRIM()
=RIGHT(TRIM(A1), 4)
Edit: As per brettdj's suggestion, you may want to check that your string is actually 4 characters long or more:
=IF(LEN(TRIM(A1))>=4, RIGHT(TRIM(A1), 4), TRIM(A1))
You can use the GlassFish server and the error will be resolved. I tried with Tomcat 7 and Tomcat 8 and this error kept coming up, but it was resolved with GlassFish. I think it's a problem with the server.
SUBSTR(column, LENGTH(column) - 3, 4)
LENGTH
returns length of string and SUBSTR
returns 4 characters starting at position length - 3
Your problem is that log4j has not been initialized. It does not affect the outcome of your application in any way, so it's safe to ignore, or just initialize Log4J, see: How to initialize log4j properly?
case isnull(B.[stat],0)
when 0 then dateadd(dd,10,(c.[Eventdate]))
end
You can add an ELSE branch if you want to add 30 days to the same date in the other case.
Using solve
with a single parameter is a request to invert a matrix. The error message is telling you that your matrix is singular and cannot be inverted.
Browse to /path/to/project, then check Copy projects into workspace
and Add project to working sets
To remove the Server:
header, go to Global.asax
, find/create the Application_PreSendRequestHeaders
event and add a line as follows (thanks to BK and this blog this will also not fail on the Cassini / local dev):
protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
{
// Remove the "Server" HTTP Header from response
HttpApplication app = sender as HttpApplication;
if (null != app && null != app.Request && !app.Request.IsLocal &&
null != app.Context && null != app.Context.Response)
{
NameValueCollection headers = app.Context.Response.Headers;
if (null != headers)
{
headers.Remove("Server");
}
}
}
If you want a complete solution to remove all related headers on Azure/IIS7 and also works with Cassini, see this link, which shows the best way to disable these headers without using HttpModules or URLScan.
You can select the rows from the table you want to export in the MySQL Workbench SQL Editor. You will find an Export button in the resultset that will allow you to export the records to a CSV file, as shown in the following image:
Please also keep in mind that by default MySQL Workbench limits the size of the resultset to 1000 records. You can easily change that in the Preferences dialog:
Hope this helps.
When checking for a column in another database, you can simply include the database name:
IF NOT EXISTS (
SELECT *
FROM DatabaseName.sys.columns
WHERE object_id = OBJECT_ID(N'[DatabaseName].[dbo].[TableName]')
AND name = 'ColumnName'
)
You could use the String.charAt(int index)
method result as the parameter for String.valueOf(char c).
String.valueOf(myString.charAt(3)) // This will return a string of the character on the 3rd position.
You can use the following approach to get the difference between two DateTime objects. Suppose there are DateTime objects dt1 and dt2, then the code:
TimeSpan diff = dt2.Subtract(dt1);
I've historically rolled my own access at a low level (XML generation and parsing) to deal with the occasional need to do SOAP style requests from Objective-C. That said, there's a library available called SOAPClient (soapclient) that is open source (BSD licensed) and available on Google Code (mac-soapclient) that might be of interest.
I won't attest to its abilities or effectiveness, as I've never used it or had to work with its APIs, but it is available and might provide a quick solution for you depending on your needs.
Apple had, at one time, a very broken utility called WS-MakeStubs. I don't think it's available on the iPhone, but you might also be interested in an open-source library intended to replace that - code generate out Objective-C for interacting with a SOAP client. Again, I haven't used it - but I've marked it down in my notes: wsdl2objc
Either create an auto-increment id or a UNIQUE id and add it to the natural key you are talking about with the 4 fields. This will make every row in the table unique...
Unfortunately, it's not completely portable. It's usually integral, but it can be any "integer or real-floating type".
What if you need any database operation in SuperGirl? You would have to inject it again into SuperGirl.
I think the main idea is using the same object reference in different classes. So what about this:
//There is no annotation about Spring in the abstract part.
abstract class SuperMan {
private final DatabaseService databaseService;
public SuperMan(DatabaseService databaseService) {
this.databaseService = databaseService;
}
abstract void Fly();
protected void doSuperPowerAction(Thing thing) {
//busy code
databaseService.save(thing);
}
}
@Component
public class SuperGirl extends SuperMan {
private final DatabaseService databaseService;
@Autowired
public SuperGirl (DatabaseService databaseService) {
super(databaseService);
this.databaseService = databaseService;
}
@Override
public void Fly() {
//busy code
}
public void doSomethingSuperGirlDoes() {
//busy code
doSuperPowerAction(thing);
}
}
In my opinion, inject once run everywhere :)
git stash drop
takes no parameter - which drops the top stash - or a stash reference which looks like: stash@{n}
where n
nominates which stash to drop. You can't pass a commit id to git stash drop
.
git stash drop # drop top stash, stash@{0}
git stash drop stash@{n} # drop specific stash - see git stash list
Dropping a stash will change the stash@{n}
designations of all stashes further down the stack.
I'm not sure why you think you need to drop a stash, because if you are using stash create
a stash entry isn't created for your "stash", so there isn't anything to drop.
I had the same problem with spring, commons-dbcp and oracle 10g. Using this URL I got the 'no suitable driver' error: jdbc:oracle:[email protected]:1521:kinangop
The above URL is missing a full colon just before the @. After correcting that, the error disappeared.
:match CursorLine /\s\+/
avoids the "you have to search for spaces to get them to show up" bit but afaict can't be configured to do non-highlighting things to the spaces. CursorLine can be any highlighting group and in the default theme it's a plain underline.
try: $('#editorTitle').attr('value')
?
In C and in C++ single quotes identify a single character, while double quotes create a string literal. 'a'
is a single a character literal, while "a"
is a string literal containing an 'a'
and a null terminator (that is a 2 char array).
In C++ the type of a character literal is char
, but note that in C, the type of a character literal is int
, that is sizeof 'a'
is 4 in an architecture where ints are 32bit (and CHAR_BIT is 8), while sizeof(char)
is 1 everywhere.
Your string is NOT valid JSON to start with.
A valid JSON will be:
{
"area": [
{
"area": "kothrud"
},
{
"area": "katraj"
}
]
}
if you do a json_decode
, it will yield,
stdClass Object
(
[area] => Array
(
[0] => stdClass Object
(
[area] => kothrud
)
[1] => stdClass Object
(
[area] => katraj
)
)
)
Update: to use
$string = '
{
"area": [
{
"area": "kothrud"
},
{
"area": "katraj"
}
]
}
';
$area = json_decode($string, true);
foreach($area['area'] as $i => $v)
{
echo $v['area'].'<br/>';
}
Output:
kothrud
katraj
Update #2:
for that true
:
When TRUE, returned objects will be converted into associative arrays. for more information, click here
ARIA (Accessible Rich Internet Applications) defines a way to make Web content and Web applications more accessible to people with disabilities.
The hidden
attribute is new in HTML5 and tells browsers not to display the element. The aria-hidden
property tells screen-readers if they should ignore the element. Have a look at the w3 docs for more details:
https://www.w3.org/WAI/PF/aria/states_and_properties#aria-hidden
Using these standards can make it easier for disabled people to use the web.
If you wish to mutate the same object rather than create a new one, the following example will delete all keys whose values are 0 or empty (i.e. falsy):
const sev = { a: 1, b: 0, c: 3 };
const deleteKeysBy = (obj, predicate) =>
Object.keys(obj)
.forEach( (key) => {
if (predicate(obj[key])) {
delete(obj[key]);
}
});
deleteKeysBy(sev, val => !val);
Your code "for /f "tokens=* delims=" %%x in (a.txt) do echo %%x" will work on most Windows Operating Systems unless you have modified commands.
So you could instead "cd" into the directory to read from before executing the "for /f" command to echo out the string. For instance, if the file "a.txt" is located at C:\documents and settings\%USERNAME%\desktop\a.txt then you'd use the following.
cd "C:\documents and settings\%USERNAME%\desktop"
for /f "tokens=* delims=" %%x in (a.txt) do echo %%x
echo.
echo.
echo.
pause >nul
exit
But since this doesn't work on your computer for x reason there is an easier and more efficient way of doing this. Using the "type" command.
@echo off
color a
cls
cd "C:\documents and settings\%USERNAME%\desktop"
type a.txt
echo.
echo.
pause >nul
exit
Or if you'd like them to select the file from which to write in the batch you could do the following.
@echo off
:A
color a
cls
echo Choose the file that you want to read.
echo.
echo.
tree
echo.
echo.
echo.
set file=
set /p file=File:
cls
echo Reading from %file%
echo.
type %file%
echo.
echo.
echo.
set re=
set /p re=Y/N?:
if %re%==Y goto :A
if %re%==y goto :A
exit
Your Window is not implementing the necessary data binding notifications that the grid requires to use it as a data source, namely the INotifyPropertyChanged interface.
Your "Name2" string needs also to be a property and not a public variable, as data binding is for use with properties.
Implementing the necessary interfaces for using an object as a data source can be found here.
I think a C++ DLL is a machine code file. Therefore decompiling will only result in assembler code. If you can read that and create C++ from that you're good to go.
You inherit class attributes, not class constructors. This is how it goes:
If no constructor is added in the super class, the compiler adds a no-argument constructor. This default constructor is invoked implicitly whenever a new instance of the sub class is created. Here the sub class may or may not have a constructor; all is ok.
If a constructor is provided in the super class, the compiler will see if it is a no-arg constructor or a constructor with parameters.
If it takes no args, then the compiler will invoke it for any sub class instantiation. Here also the sub class may or may not have a constructor; all is ok.
If one or more constructors in the parent class have parameters and there is no no-arg constructor, then the subclass has to have at least one constructor in which an explicit call to the parent class constructor is made via super(parent_constructor_params).
This way you are sure that the inherited class attributes are always initialized.
On Windows -- open the my.ini file, present at "C:\ProgramData\MySQL\MySQL Server 5.6", find "[mysqld]" (without quotes), add explicit_defaults_for_timestamp on the next line, and then save the changes.
I'm guessing that when you enter a value into the totals field that value expression somehow gets overwritten.
However, you can take an alternative approach: Create a field for the total value and when either one
or two
changes update that field.
<li>Total <input type="text" ng-model="total">{{total}}</li>
And change the javascript:
function TodoCtrl($scope) {
$scope.$watch('one * two', function (value) {
$scope.total = value;
});
}
Example fiddle here.
Solution for swift 4.2
So, if you just want to answer the question whether the dictionary contains the key, ask:
let keyExists = dict[key] != nil
If you want the value and you know the dictionary contains the key, say:
let val = dict[key]!
But if, as usually happens, you don't know it contains the key - you want to fetch it and use it, but only if it exists - then use something like if let
:
if let val = dict[key] {
// now val is not nil and the Optional has been unwrapped, so use it
}
The string in your question is not a valid JSON string. From the json.org website:
JSON is built on two structures:
* A collection of name/value pairs. In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array.
* An ordered list of values. In most languages, this is realized as an array, vector, list, or sequence.
Basically a json string will always start with either { or [.
Then as @Andy E and @Cryo said you can parse the string with json2.js or some other libraries.
IMHO you should avoid eval because it will execute any JavaScript program, so you might incur security issues.
Instead of remotely putting popper js from CDN you can directly install it in your angular project.
Try this.
npm install popper.js --save
This command installs an up-to-date version of popper.js. Don't mention any version there; it will work for you.
Use p{width}
column specifier: e.g. \begin{tabular}{ l p{10cm} }
will put the column's content into a 10cm-wide parbox, and the text will be properly broken into several lines, like in a normal paragraph.
You can also use tabular*
environment to specify width for the entire table.
If the seq
command available on your system:
for i in `seq 2 $max`
do
echo "output: $i"
done
If not, then use poor man's seq
with perl
:
seq=`perl -e "\$,=' ';print 2..$max"`
for i in $seq
do
echo "output: $i"
done
Watch those quote marks.
Usually, we want to divide our vector into a number of intervals. In this case, you can use a function where (a) is a vector and (b) is the number of intervals. (Let's suppose you want 4 intervals)
a <- 1:10
b <- 4
FunctionIntervalM <- function(a,b) {
seq(from=min(a), to = max(a), by = (max(a)-min(a))/b)
}
FunctionIntervalM(a,b)
# 1.00 3.25 5.50 7.75 10.00
Therefore you have 4 intervals:
1.00 - 3.25
3.25 - 5.50
5.50 - 7.75
7.75 - 10.00
You can also use a cut function
cut(a, 4)
# (0.991,3.25] (0.991,3.25] (0.991,3.25] (3.25,5.5] (3.25,5.5] (5.5,7.75]
# (5.5,7.75] (7.75,10] (7.75,10] (7.75,10]
#Levels: (0.991,3.25] (3.25,5.5] (5.5,7.75] (7.75,10]
I recommend php-ffmpeg library.
Extracting image
You can extract a frame at any timecode using the
FFMpeg\Media\Video::frame
method. This code returns a
FFMpeg\Media\Frame
instance corresponding to second 42. You can pass any FFMpeg\Coordinate\TimeCode
as argument, see dedicated documentation below for more information.
$frame = $video->frame(FFMpeg\Coordinate\TimeCode::fromSeconds(42));
$frame->save('image.jpg');
If you want to extract multiple images from the video, you can use the following filter:
$video
->filters()
->extractMultipleFrames(FFMpeg\Filters\Video\ExtractMultipleFramesFilter::FRAMERATE_EVERY_10SEC, '/path/to/destination/folder/')
->synchronize();
$video
->save(new FFMpeg\Format\Video\X264(), '/path/to/new/file');
By default, this will save the frames as jpg images.
You are able to override this using setFrameFileType
to save the frames in another format:
$frameFileType = 'jpg'; // either 'jpg', 'jpeg' or 'png'
$filter = new ExtractMultipleFramesFilter($frameRate, $destinationFolder);
$filter->setFrameFileType($frameFileType);
$video->addFilter($filter);
I figured it out. I had an error in my CloudFormation template that was creating the EC2 instances. As a result, the EC2 instances that were trying to access the above CodeDeploy buckets were in different regions (not us-west-2). It seems like the access policies on the buckets (owned by Amazon) only allow access from the region they belong in. When I fixed the error in my template (it was a wrong parameter mapping), the error disappeared.
You must use $lastId = $this->db->insert_id();
import sys
print(sys.executable)
print(sys.version)
print(sys.version_info)
Seen below: the output when I run Jupyter Notebook outside a CONDA venv
/home/dhankar/anaconda2/bin/python
2.7.12 |Anaconda 4.2.0 (64-bit)| (default, Jul 2 2016, 17:42:40)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]
sys.version_info(major=2, minor=7, micro=12, releaselevel='final', serial=0)
Seen below is what I get when I run the same Jupyter Notebook within a CONDA venv created with the command --
conda create -n py35 python=3.5 ## Here - py35 , is name of my VENV
in my Jupyter Notebook it prints :-
/home/dhankar/anaconda2/envs/py35/bin/python
3.5.2 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:53:06)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]
sys.version_info(major=3, minor=5, micro=2, releaselevel='final', serial=0)
Also, if you already have various VENVs created with different versions of Python, you can switch to the desired kernel by choosing KERNEL >> CHANGE KERNEL from within the Jupyter Notebook menu.
Also to install ipykernel within an existing CONDA Virtual Environment -
$ /path/to/python -m ipykernel install --help
usage: ipython-kernel-install [-h] [--user] [--name NAME]
[--display-name DISPLAY_NAME]
[--profile PROFILE] [--prefix PREFIX]
[--sys-prefix]
Install the IPython kernel spec.
optional arguments:
-h, --help            show this help message and exit
--user                Install for the current user instead of system-wide
--name NAME           Specify a name for the kernelspec. This is needed to have multiple IPython kernels at the same time.
--display-name DISPLAY_NAME
                      Specify the display name for the kernelspec. This is helpful when you have multiple IPython kernels.
--profile PROFILE     Specify an IPython profile to load. This can be used to create custom versions of the kernel.
--prefix PREFIX       Specify an install prefix for the kernelspec. This is needed to install into a non-default location, such as a conda/virtual-env.
--sys-prefix          Install to Python's sys.prefix. Shorthand for --prefix='/Users/bussonniermatthias/anaconda'. For use in conda/virtual-envs.
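For example, a typical invocation from inside the activated env might look like this (the kernel name and display name below are just placeholders):
/path/to/python -m ipykernel install --user --name py35 --display-name "Python (py35)"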
In many cases, I believe @allcaps's answer works well.
However, sometimes it is necessary to actually rename an app, e.g. to improve code readability or prevent confusion.
Most of the other answers involve either manual database manipulation or tinkering with existing migrations, which I do not like very much.
As an alternative, I like to create a new app with the desired name, copy everything over, make sure it works, then remove the original app:
Start a new app with the desired name, and copy all code from the original app into that. Make sure you fix the namespaced stuff, in the newly copied code, to match the new app name.
makemigrations
and migrate
Create a data migration that copies the relevant data from the original app's tables into the new app's tables, and migrate
again.
At this point, everything still works, because the original app and its data are still in place.
Now you can refactor all the dependent code, so it only makes use of the new app. See other answers for examples of what to look out for.
Once you are certain that everything works, you can remove the original app.
This has the advantage that every step uses the normal Django migration mechanism, without manual database manipulation, and we can track everything in source control. In addition, we keep the original app and its data in place until we are sure everything works.
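As a rough, hypothetical sketch of the data-migration step described above (the app, model and field names old_app, new_app, Thing and name are placeholders; adjust them to your schema):
from django.db import migrations
def copy_forward(apps, schema_editor):
    OldThing = apps.get_model('old_app', 'Thing')
    NewThing = apps.get_model('new_app', 'Thing')
    for obj in OldThing.objects.all():
        # copy only the fields your model actually has
        NewThing.objects.create(id=obj.id, name=obj.name)
class Migration(migrations.Migration):
    dependencies = [
        ('old_app', '0001_initial'),
        ('new_app', '0001_initial'),
    ]
    operations = [
        migrations.RunPython(copy_forward, migrations.RunPython.noop),
    ]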
Since Python 3.6 you can use f-strings:
>>> length = 1
>>> print(f'length = {length:03}')
length = 001
From the manual:
Variables in make can come from the environment in which make is run. Every environment variable that make sees when it starts up is transformed into a make variable with the same name and value. However, an explicit assignment in the makefile, or with a command argument, overrides the environment.
So you can do (from bash):
FOOBAR=1 make
resulting in a variable FOOBAR
in your Makefile.
In newer versions of VSCode this moved to a folder-specific configuration block.
Then add a "files.exclude" block, listing the directory globs you would like to exclude:
{
"files.exclude": {
"**/bin": true,
"**/obj": true
},
}
In my case, I was trying out Kotlin + Spring boot and I got into the Circular View Path issue. All the suggestions I got online could not help, until I tried the below:
Originally I had annotated my controller using @Controller
import org.springframework.stereotype.Controller
I then replaced @Controller
with @RestController
import org.springframework.web.bind.annotation.RestController
And it worked.
Leveraging David Dehghan's answer above, the following works in Python 2.7.13:
from PyPDF2 import PdfFileWriter, PdfFileReader, PdfFileMerger
import StringIO
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import letter
packet = StringIO.StringIO()
# create a new PDF with Reportlab
can = canvas.Canvas(packet, pagesize=letter)
can.drawString(290, 720, "Hello world")
can.save()
#move to the beginning of the StringIO buffer
packet.seek(0)
new_pdf = PdfFileReader(packet)
# read your existing PDF
existing_pdf = PdfFileReader("original.pdf")
output = PdfFileWriter()
# add the "watermark" (which is the new pdf) on the existing page
page = existing_pdf.getPage(0)
page.mergePage(new_pdf.getPage(0))
output.addPage(page)
# finally, write "output" to a real file
outputStream = open("destination.pdf", "wb")
output.write(outputStream)
outputStream.close()
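If you are on Python 3, note that the StringIO module no longer exists; assuming the same PyPDF2 1.x API as above, the only change needed is to build the in-memory buffer with io.BytesIO:
import io
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import letter
packet = io.BytesIO()  # bytes buffer instead of StringIO.StringIO()
can = canvas.Canvas(packet, pagesize=letter)
can.drawString(290, 720, "Hello world")
can.save()
packet.seek(0)
# the remaining PdfFileReader/PdfFileWriter steps stay the same as above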
I invite you to explore what Python 3.8 has to offer for the subject matter in the form of Structural subtyping (static duck typing)
(PEP 544)
See the short description https://docs.python.org/3/library/typing.html#typing.Protocol
For the simple example here it goes like this:
from typing import Protocol
class MyShowProto(Protocol):
def show(self):
...
class MyClass:
def show(self):
print('Hello World!')
class MyOtherClass:
pass
def foo(o: MyShowProto):
return o.show()
foo(MyClass()) # ok
foo(MyOtherClass()) # fails
foo(MyOtherClass())
will fail static type checks:
$ mypy proto-experiment.py
proto-experiment.py:21: error: Argument 1 to "foo" has incompatible type "MyOtherClass"; expected "MyShowProto"
Found 1 error in 1 file (checked 1 source file)
In addition, you can specify the base class explicitly, for instance:
class MyOtherClass(MyShowProto):
but note that this makes methods of the base class actually available on the subclass, and thus the static checker will not report that a method definition is missing on the MyOtherClass
.
So in this case, in order to get a useful type-checking, all the methods that we want to be explicitly implemented should be decorated with @abstractmethod
:
from typing import Protocol
from abc import abstractmethod
class MyShowProto(Protocol):
@abstractmethod
def show(self): raise NotImplementedError
class MyOtherClass(MyShowProto):
pass
MyOtherClass() # error in type checker
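As an aside not covered in the original answer: if you also want runtime isinstance() checks in addition to the static ones, the Protocol can be decorated with typing.runtime_checkable (using the classes from the first snippet):
from typing import Protocol, runtime_checkable
@runtime_checkable
class MyShowProto(Protocol):
    def show(self): ...
isinstance(MyClass(), MyShowProto)       # True: it has a show() method
isinstance(MyOtherClass(), MyShowProto)  # False: no show() method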
JavaScript 1.2 was introduced with Netscape Navigator 4 in 1997. That version number only ever had significance for Netscape browsers. For example, Microsoft's implementation (as used in Internet Explorer) is called JScript, and has its own version numbering which bears no relation to Netscape's numbering.
Teradata tables or views store NULL as "?" and SQL considers it a character or string. This is the main reason for the error "The value violated the integrity constraints for the column." when data is ported from a Teradata source to a SQL destination.
Solution 1: Allow the destination table to hold NULL.
Solution 2: Convert the '?' character to be stored as some value in the destination table.
You need antMatcher
for multiple HttpSecurity
, see Spring Security Reference:
5.7 Multiple HttpSecurity
We can configure multiple HttpSecurity instances just as we can have multiple <http> blocks. The key is to extend the WebSecurityConfigurerAdapter multiple times. For example, the following is an example of having a different configuration for URLs that start with /api/.
@EnableWebSecurity
public class MultiHttpSecurityConfig {
    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) { // 1
        auth
            .inMemoryAuthentication()
                .withUser("user").password("password").roles("USER").and()
                .withUser("admin").password("password").roles("USER", "ADMIN");
    }
    @Configuration
    @Order(1) // 2
    public static class ApiWebSecurityConfigurationAdapter extends WebSecurityConfigurerAdapter {
        protected void configure(HttpSecurity http) throws Exception {
            http
                .antMatcher("/api/**") // 3
                .authorizeRequests()
                    .anyRequest().hasRole("ADMIN")
                    .and()
                .httpBasic();
        }
    }
    @Configuration // 4
    public static class FormLoginWebSecurityConfigurerAdapter extends WebSecurityConfigurerAdapter {
        @Override
        protected void configure(HttpSecurity http) throws Exception {
            http
                .authorizeRequests()
                    .anyRequest().authenticated()
                    .and()
                .formLogin();
        }
    }
}
1 Configure Authentication as normal
2 Create an instance of WebSecurityConfigurerAdapter that contains @Order to specify which WebSecurityConfigurerAdapter should be considered first.
3 The http.antMatcher states that this HttpSecurity will only be applicable to URLs that start with /api/
4 Create another instance of WebSecurityConfigurerAdapter. If the URL does not start with /api/ this configuration will be used. This configuration is considered after ApiWebSecurityConfigurationAdapter since it has an @Order value after 1 (no @Order defaults to last).
In your case you need no antMatcher
, because you have only one configuration. Your modified code:
http
.authorizeRequests()
.antMatchers("/high_level_url_A/sub_level_1").hasRole("USER")
.antMatchers("/high_level_url_A/sub_level_2").hasRole("USER2")
.somethingElse() // for /high_level_url_A/**
.antMatchers("/high_level_url_A/**").authenticated()
.antMatchers("/high_level_url_B/sub_level_1").permitAll()
.antMatchers("/high_level_url_B/sub_level_2").hasRole("USER3")
.somethingElse() // for /high_level_url_B/**
.antMatchers("/high_level_url_B/**").authenticated()
.anyRequest().permitAll();
Using alpha 28, I accomplished programmatically subscribing to event emitters by way of the eventEmitter.toRx().subscribe(..)
method. As it is not intuitive, it may perhaps change in a future release.
Your approach is OK
Maybe slightly clearer (to me anyway!)
UPDATE
T1
SET
[Description] = t2.[Description]
FROM
Table1 T1
JOIN
[Table2] t2 ON t2.[ID] = t1.DescriptionID
Both this and your query should run the same performance wise because it is the same query, just laid out differently.
Try this: Go to Project Properties -> C/C++ -> Code Generation -> Runtime Library and select the value Multi-threaded DLL (/MD) from the combo box. It worked for me :)
I'm using SSH to authenticate my GitHub account and have a couple dependencies in my project installed as follows:
"dependencies": {
"<dependency name>": "git+ssh://[email protected]/<github username>/<repository name>.git#<release version | branch>"
}
Easiest solution
<button type="button" onclick="window.location.href='{{ url_for( 'move_forward') }}';">Forward</button>
I know this question is a little old but the following worked for me in a Jupyter Notebook running pandas 0.22.0 and Python 3:
import pandas as pd
pd.set_option('display.max_columns', <number of columns>)
You can do the same for the rows too:
pd.set_option('display.max_rows', <number of rows>)
This saves importing IPython, and there are more options in the pandas.set_option documentation: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.set_option.html
URLs are not resolved based on the file structure on the server side. URLs are resolved based on the real public web addresses of the resources in question. It's namely the web browser that has to invoke them, not the web server.
There are several ways to soften the pain:
JSF EL offers a shorthand to ${pageContext.request}
in flavor of #{request}
:
<li><a href="#{request.contextPath}/index.xhtml">Home</a></li>
<li><a href="#{request.contextPath}/about_us.xhtml">About us</a></li>
You can if necessary use <c:set>
tag to make it yet shorter. Put it somewhere in the master template, it'll be available to all pages:
<c:set var="root" value="#{request.contextPath}/" />
...
<li><a href="#{root}index.xhtml">Home</a></li>
<li><a href="#{root}about_us.xhtml">About us</a></li>
JSF 2.x offers the <h:link>
which can take a view ID relative to the context root in outcome
and it will append the context path and FacesServlet
mapping automatically:
<li><h:link value="Home" outcome="index" /></li>
<li><h:link value="About us" outcome="about_us" /></li>
HTML offers the <base>
tag which makes all relative URLs in the document relative to this base. You could make use of it. Put it in the <h:head>
.
<base href="#{request.requestURL.substring(0, request.requestURL.length() - request.requestURI.length())}#{request.contextPath}/" />
...
<li><a href="index.xhtml">Home</a></li>
<li><a href="about_us.xhtml">About us</a></li>
(note: this requires EL 2.2, otherwise you'd better use JSTL fn:substring()
, see also this answer)
This should end up in the generated HTML as something like
<base href="http://example.com/webname/" />
Note that the <base>
tag has a caveat: it makes all jump anchors in the page like <a href="#top">
relative to it as well! See also Is it recommended to use the <base> html tag? In JSF you could solve it like <a href="#{request.requestURI}#top">top</a>
or <h:link value="top" fragment="top" />
.
In most database systems, IN (val1, val2, …)
and a series of OR
are optimized to the same plan.
The third way would be importing the list of values into a temporary table and joining it, which is more efficient in most systems if there are lots of values.
You may want to read these articles:
An EXE
file is created whenever you build the project. You can usually find it in the Debug folder of your project:
C:\Users\username\Documents\Visual Studio 2012\Projects\ProjectName\bin\Debug
There is a static method on the Thread
Class that will return the number of active threads controlled by the JVM:
Thread.activeCount()
Returns the number of active threads in the current thread's thread group.
Additionally, external debuggers should list all active threads (and allow you to suspend any number of them) if you wish to monitor them in real-time.
The short answer is primitives are data types, while references are pointers, which do not hold their values but point to their values and are used on/with objects.
Primitives:
boolean
character
byte
short
integer
long
float
double
Lots of good references that explain these basic concepts. http://www.javaforstudents.co.uk/Types
from http://www.scienceprimer.com/drawing-regular-polygons-javascript-canvas:
The following code will draw a hexagon. Change the number of sides to create different regular polygons.
var ctx = document.getElementById('hexagon').getContext('2d');
// hexagon
var numberOfSides = 6,
    size = 20,
    Xcenter = 25,
    Ycenter = 25;
ctx.beginPath();
ctx.moveTo(Xcenter + size * Math.cos(0), Ycenter + size * Math.sin(0));
for (var i = 1; i <= numberOfSides; i += 1) {
    ctx.lineTo(Xcenter + size * Math.cos(i * 2 * Math.PI / numberOfSides), Ycenter + size * Math.sin(i * 2 * Math.PI / numberOfSides));
}
ctx.strokeStyle = "#000000";
ctx.lineWidth = 1;
ctx.stroke();
#hexagon { border: thin dashed red; }
<canvas id="hexagon"></canvas>
I would recommend using the visual designer built into VS2008, as updating the dbml also updates the code that is generated for you. Modifying the dbml outside of the visual designer would result in the underlying code being out of sync.