I had this issue when the user connecting to the database had CONNECT permissions, but no permissions to read from the database. In my case, I could not even do something like this:
object userNameObj = command.ExecuteScalar();
Putting this in a try/catch (which you should probably be doing anyway) was the only way I could see to handle the insufficient permission issue.
In SQL Server
You can do this using With CTE
WITH common_table_expression (Transact-SQL)
CREATE TABLE tab(ColumnA DECIMAL(10,2), ColumnB DECIMAL(10,2), ColumnC DECIMAL(10,2))
INSERT INTO tab(ColumnA, ColumnB, ColumnC) VALUES (2, 10, 2),(3, 15, 6),(7, 14, 3)
WITH tab_CTE (ColumnA, ColumnB, ColumnC,calccolumn1)
AS
(
Select
ColumnA,
ColumnB,
ColumnC,
ColumnA + ColumnB As calccolumn1
from tab
)
SELECT
ColumnA,
ColumnB,
calccolumn1,
calccolumn1 / ColumnC AS calccolumn2
FROM tab_CTE
To follow up on Theo's suggestion with my findings (apologies - I don't currently have enough reputation to post this as a comment)
First, this is how to use several named parameters:
String commandString = "INSERT INTO Users (Name, Desk, UpdateTime) VALUES (:Name, :Desk, :UpdateTime)";
using (OracleCommand command = new OracleCommand(commandString, _connection, _transaction))
{
command.Parameters.Add("Name", OracleType.VarChar, 50).Value = strategy;
command.Parameters.Add("Desk", OracleType.VarChar, 50).Value = deskName ?? OracleString.Null;
command.Parameters.Add("UpdateTime", OracleType.DateTime).Value = updated;
command.ExecuteNonQuery();
}
However, I saw no variation in speed between:
I'm using System.Data.OracleClient, deleting and inserting 2500 rows inside a transaction
What you're trying to insert is not a date, I think, but a string. You need to use to_date()
function, like this:
insert into table t1 (id, date_field) values (1, to_date('20.06.2013', 'dd.mm.yyyy'));
REGEXP_COUNT wasn't added until Oracle 11g. Here's an Oracle 10g solution, adapted from Art's solution.
SELECT trim(regexp_substr('Err1, Err2, Err3', '[^,]+', 1, LEVEL)) str_2_tab
FROM dual
CONNECT BY LEVEL <=
LENGTH('Err1, Err2, Err3')
- LENGTH(REPLACE('Err1, Err2, Err3', ',', ''))
+ 1;
The usual way is to use UPDATE:
UPDATE mytable
SET new_column = <expr containing old_column>
You should be able to do this in a single transaction.
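A minimal sketch of that approach, using hypothetical names (mytable, old_column, new_column) and a placeholder expression:
-- hypothetical names; the expression is just an example
ALTER TABLE mytable ADD (new_column VARCHAR2(100));
UPDATE mytable
SET new_column = UPPER(old_column);
COMMIT;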
In 12c you can make use of the fact that columns which are set from invisible to visible are displayed as the last column of the table: Tips and Tricks: Invisible Columns in Oracle Database 12c
Maybe that is the 'trick' @jeffrey-kemp was talking about in his comment, but the link there does not work anymore.
Example:
ALTER TABLE my_tab ADD (col_3 NUMBER(10));
ALTER TABLE my_tab MODIFY (
col_1 invisible,
col_2 invisible
);
ALTER TABLE my_tab MODIFY (
col_1 visible,
col_2 visible
);
Now col_3 would be displayed first in a SELECT * FROM my_tab
statement.
Note: This does not change the physical order of the columns on disk, but in most cases that is not what you want to do anyway. If you really want to change the physical order, you can use the DBMS_REDEFINITION package.
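For reference, a rough sketch of using DBMS_REDEFINITION for that, assuming a hypothetical interim table MY_TAB_INT created with the columns in the desired order (a real redefinition usually has more steps, such as copying dependents; check the documentation for your version):
-- hypothetical schema SCOTT and tables MY_TAB / MY_TAB_INT
CREATE TABLE my_tab_int AS SELECT col_3, col_1, col_2 FROM my_tab WHERE 1 = 0;
BEGIN
DBMS_REDEFINITION.can_redef_table('SCOTT', 'MY_TAB');
DBMS_REDEFINITION.start_redef_table('SCOTT', 'MY_TAB', 'MY_TAB_INT');
DBMS_REDEFINITION.finish_redef_table('SCOTT', 'MY_TAB', 'MY_TAB_INT');
END;
/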
SELECT * FROM all_procedures WHERE OBJECT_TYPE IN ('FUNCTION','PROCEDURE','PACKAGE')
and owner = 'Schema_name' order by object_name
Here 'Schema_name' is the name of the schema. For example, I have a schema named PMIS, so the query becomes:
SELECT * FROM all_procedures WHERE OBJECT_TYPE IN ('FUNCTION','PROCEDURE','PACKAGE')
and owner = 'PMIS' order by object_name
Ref: https://www.plsql.co/list-all-procedures-from-a-schema-of-oracle-database.html
Although @BrianHart's answer is correct, if you are connecting from a remote host, you'll also need to allow remote hosts to connect to the MySQL/MariaDB database.
My article describes the full instructions to connect to a MySQL/MariaDB database in Oracle SQL Developer:
Keep in mind that SQL strings cannot be larger than 4000 bytes, while PL/SQL can have strings as large as 32767 bytes. See below for an example of inserting a large string via an anonymous block, which I believe will do everything you need it to do.
Note: I changed the VARCHAR2(32000) to CLOB.
set serveroutput ON
CREATE TABLE testclob
(
id NUMBER,
c CLOB,
d VARCHAR2(4000)
);
DECLARE
reallybigtextstring CLOB := '123';
i INT;
BEGIN
WHILE Length(reallybigtextstring) <= 60000 LOOP
reallybigtextstring := reallybigtextstring
|| '000000000000000000000000000000000';
END LOOP;
INSERT INTO testclob
(id,
c,
d)
VALUES (0,
reallybigtextstring,
'done');
dbms_output.Put_line('I have finished inputting your clob: '
|| Length(reallybigtextstring));
END;
/
SELECT *
FROM testclob;
"I have finished inputting your clob: 60030"
you can use cross apply
:
select
a.x,
bb.y,
bb.z
from
a
cross apply
( select b.y, b.z
from b
where b.v = a.v
) bb
If there is no row in b to match a row from a, then cross apply
won't return that row. If you need such rows, then use outer apply.
If you need to find only one specific row for each of row from a, try:
cross apply
( select top 1 b.y, b.z
from b
where b.v = a.v
order by b.order
) bb
You forgot to put z as a bind variable.
The following EXECUTE command runs a PL/SQL statement that references a stored procedure:
SQL> EXECUTE -
> :Z := EMP_SALE.HIRE('JACK','MANAGER','JONES',2990,'SALES')
Note that the value returned by the stored procedure is returned into :Z
You can also check by running:
ps -ef |grep -i ora
SQL> -- original . . .
SQL> select
2 to_char( sysdate, 'Day "the" Ddth "of" Month, yyyy' ) dt
3 from dual;
DT
----------------------------------------
Friday the 13th of May , 2016
SQL>
SQL> -- collapse repeated spaces . . .
SQL> select
2 regexp_replace(
3 to_char( sysdate, 'Day "the" Ddth "of" Month, yyyy' ),
4 ' * *', ' ') datesp
5 from dual;
DATESP
----------------------------------------
Friday the 13th of May , 2016
SQL>
SQL> -- and space before commma . . .
SQL> select
2 regexp_replace(
3 to_char( sysdate, 'Day "the" Ddth "of" Month, yyyy' ),
4 ' *(,*) *', '\1 ') datesp
5 from dual;
DATESP
----------------------------------------
Friday the 13th of May, 2016
SQL>
SQL> -- space before punctuation . . .
SQL> select
2 regexp_replace(
3 to_char( sysdate, 'Day "the" Ddth "of" Month, yyyy' ),
4 ' *([.,/:;]*) *', '\1 ') datesp
5 from dual;
DATESP
----------------------------------------
Friday the 13th of May, 2016
http://asktom.oracle.com/tkyte/Misc/DateDiff.html - link dead as of 2012-01-30
Looks like this is the resource:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551242712657900129
Alternatively you can install the cx_Oracle module without the PIP using the following steps
Extract the tar using the following commands (Linux)
gunzip cx_Oracle-6.1.tar.gz
tar -xf cx_Oracle-6.1.tar
cd cx_Oracle-6.1
Build the module
python setup.py build
Install the module
python setup.py install
A very simple solution is to prefix your table name with the database name. For example, if your DB name is DBMS and the table is info, then it becomes DBMS.info in any query.
If your query is
select * from STUDENTREC where ROLL_NO=1;
it might show an error but
select * from DBMS.STUDENTREC where ROLL_NO=1;
it doesn't, because now your table is actually found.
For MS SQL Server:
select * from information_schema.columns where table_name = 'tableName'
The key checks for FAST REFRESH include the following:
1) An Oracle materialized view log must be present for each base table.
2) The RowIDs of all the base tables must appear in the SELECT list of the MVIEW query definition.
3) If there are outer joins, unique constraints must be placed on the join columns of the inner table.
No. 3 is easy to miss and worth highlighting here.
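To illustrate points 1 and 2, a minimal sketch with hypothetical tables emp and dept joined on deptno:
CREATE MATERIALIZED VIEW LOG ON emp WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON dept WITH ROWID;
CREATE MATERIALIZED VIEW emp_dept_mv
REFRESH FAST ON COMMIT
AS
SELECT e.rowid AS emp_rid, d.rowid AS dept_rid, e.ename, d.dname
FROM emp e, dept d
WHERE e.deptno = d.deptno;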
Case 1 : Yes, this works fine.
Case 2 : This will fail with the error ORA-01441 : cannot decrease column length because some value is too big.
Share and enjoy.
This is a highly inefficient way of doing it. You can use the merge
statement and then there's no need for cursors, looping or (if you can do without) PL/SQL.
MERGE INTO studLoad l
USING ( SELECT studId, studName FROM student ) s
ON (l.studId = s.studId)
WHEN MATCHED THEN
UPDATE SET l.studName = s.studName
WHERE l.studName != s.studName
WHEN NOT MATCHED THEN
INSERT (l.studID, l.studName)
VALUES (s.studId, s.studName)
Make sure you commit once completed, in order to be able to see the changes in the database.
To actually answer your question I would do it something like as follows. This has the benefit of doing most of the work in SQL and only updating based on the rowid, a unique address in the table.
It declares a type, into which the data is placed in bulk, 10,000 rows at a time. These rows are then processed individually.
However, as I say, this will not be as efficient as merge.
declare
cursor c_data is
select b.rowid as rid, a.studId, a.studName
from student a
left outer join studLoad b
on a.studId = b.studId
and a.studName <> b.studName
;
type t__data is table of c_data%rowtype index by binary_integer;
t_data t__data;
begin
open c_data;
loop
fetch c_data bulk collect into t_data limit 10000;
exit when t_data.count = 0;
for idx in t_data.first .. t_data.last loop
if t_data(idx).rid is null then
insert into studLoad (studId, studName)
values (t_data(idx).studId, t_data(idx).studName);
else
update studLoad
set studName = t_data(idx).studName
where rowid = t_data(idx).rid
;
end if;
end loop;
end loop;
close c_data;
end;
/
I had the same problem and used the solution offered above: I dropped the SYNONYM and created a VIEW with the same name as the synonym; it did a select using the dblink, and I gave GRANT SELECT to the other schema. It worked great.
SELECT A.ABC_ID, A.VAL FROM A WHERE NOT EXISTS
(SELECT * FROM B WHERE B.ABC_ID = A.ABC_ID AND B.VAL = A.VAL)
or
SELECT A.ABC_ID, A.VAL FROM A WHERE VAL NOT IN
(SELECT VAL FROM B WHERE B.ABC_ID = A.ABC_ID)
or
SELECT A.ABC_ID, A.VAL FROM A LEFT OUTER JOIN B
ON A.ABC_ID = B.ABC_ID AND A.VAL = B.VAL WHERE B.VAL IS NULL
Please note that these queries do not require that ABC_ID be in table B at all. I think that does what you want.
Adding to Mike McAllister's pretty-thorough answer...
Materialized views can only be set to refresh automatically through the database detecting changes when the view query is considered simple by the compiler. If it's considered too complex, it won't be able to set up what are essentially internal triggers to track changes in the source tables to only update the changed rows in the mview table.
When you create a materialized view, you'll find that Oracle creates both the mview object and a table with the same name, which can make things confusing.
It can also be due to a duplicate entry in any of the tables that are used.
Two possible approaches.
If you have a foreign key, declare it as on-delete-cascade and delete the parent rows older than 30 days. All the child rows will be deleted automatically.
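A minimal sketch of that first approach, reusing the table/column names from below (the exact date condition is up to you):
-- assumes parent_table(parent_id) and child_table(parent_id, updd_tms)
ALTER TABLE child_table
ADD CONSTRAINT fk_child_parent
FOREIGN KEY (parent_id) REFERENCES parent_table (parent_id)
ON DELETE CASCADE;
-- deleting old parents now removes their children automatically
DELETE FROM parent_table WHERE updd_tms < sysdate - 30;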
Based on your description, it looks like you know the parent rows that you want to delete and need to delete the corresponding child rows. Have you tried SQL like this?
delete from child_table
where parent_id in (
select parent_id from parent_table
where updd_tms != (sysdate-30)
);
-- now delete the parent table records
delete from parent_table
where updd_tms != (sysdate-30);
---- Based on your requirement, it looks like you might have to use PL/SQL. I'll see if someone can post a pure SQL solution to this (in which case that would definitely be the way to go).
declare
foreign_key_violated exception;
PRAGMA EXCEPTION_INIT(foreign_key_violated, -2291);
begin
for v_rec in (select parent_id, child_id from child_table
where updd_tms != (sysdate-30) ) loop
-- delete the children
delete from child_table where child_id = v_rec.child_id;
-- delete the parent. If we get foreign key violation,
-- stop this step and continue the loop
begin
delete from parent_table
where parent_id = v_rec.parent_id;
exception
when foreign_key_violated
then null;
end;
end loop;
end;
/
This is my approximation:
Declare
Variableclob Clob;
Temp_Save Varchar2(32767); -- in case it is greater than 4000
Begin
Select reportClob Into Temp_Save From Reporte Where Id=...;
Variableclob:=To_Clob(Temp_Save);
Dbms_Output.Put_Line(Variableclob);
End;
You should try sqlldr's SKIP_INDEX_MAINTENANCE parameter.
Here is how I solved the same problem using the Oracle.DataAccess.Client
Namespace.
using Oracle.DataAccess.Client;
string strConnection = ConfigurationManager.ConnectionStrings["oConnection"].ConnectionString;
dataConnection = new OracleConnectionStringBuilder(strConnection);
OracleConnection oConnection = new OracleConnection(dataConnection.ToString());
oConnection.Open();
OracleCommand tmpCommand = oConnection.CreateCommand();
tmpCommand.Parameters.Add("user", OracleDbType.Varchar2, txtUser.Text, ParameterDirection.Input);
tmpCommand.CommandText = "SELECT USER, PASS FROM TB_USERS WHERE USER = :1";
try
{
OracleDataReader tmpReader = tmpCommand.ExecuteReader(CommandBehavior.SingleRow);
if (tmpReader.HasRows)
{
// PT: IMPLEMENTE SEU CÓDIGO
// ES: IMPLEMENTAR EL CÓDIGO
// EN: IMPLEMENT YOUR CODE
}
}
catch(Exception e)
{
// PT: IMPLEMENTE SEU CÓDIGO
// ES: IMPLEMENTAR EL CÓDIGO
// EN: IMPLEMENT YOUR CODE
}
insert into TABLE_NAME
(COL1,COL2)
WITH
data AS
(
select 'some value' x from dual
union all
select 'another value' x from dual
)
SELECT my_seq.NEXTVAL, x
FROM data
;
I think that is what you want, but I don't have access to Oracle to test it right now.
I'm running SQL Developer 17.2.0.188 build 188.1159 which does indeed contain data modeling capability. I just created a relational model diagram via the menu: File->Data Modeler->Import->Data Dictionary....
I also have the stand-alone Data Modeler, which does the same thing.
As the Data Modeler tutorial states:
Figure 4: Relational model and diagram for HR
"The diagram you've generated is not an ERD. Logical models are higher abstractions. An ERD represents entities and their attributes and relations, whereas a relational or physical model represents tables, columns, and foreign keys."
I know this is an old question, but I tried all the above answers and none of them worked in my case. What ultimately helped me out was:
SHOW PARAMETER instance_name
If you want to make your PK auto increment, you need to set the ID column property for that primary key.
See the picture below for better understanding.
My source is: http://techatplay.wordpress.com/2013/11/22/oracle-sql-developer-create-auto-incrementing-primary-key/
An alternative solution is using an external table: http://www.orafaq.com/node/848
Use this when you have to do this import very often and very fast.
Another way is to use TRANSLATE:
TRANSLATE (col_name, 'x'||CHR(10)||CHR(13), 'x')
The 'x' is any character that you don't want translated to null, because TRANSLATE doesn't work right if the 3rd parameter is null.
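For example, stripping CR/LF characters while leaving the rest of the string intact:
SELECT TRANSLATE('line1' || CHR(13) || CHR(10) || 'line2',
'x' || CHR(10) || CHR(13),
'x') AS no_newlines
FROM dual;
-- returns 'line1line2'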
This 'data not found' error can be caused by the datatypes involved.
For example, in select empid into v_test,
both empid and v_test have to be of NUMBER type; only then will the value be stored.
So keep track of the datatypes when you get this error; that may help.
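Whatever the cause, a minimal sketch of guarding a SELECT INTO against this error (emp and empid are hypothetical names):
DECLARE
v_test NUMBER;
BEGIN
SELECT empid INTO v_test FROM emp WHERE empid = 42;
EXCEPTION
WHEN NO_DATA_FOUND THEN
dbms_output.put_line('No matching row found');
END;
/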
This excellent answer to a similar question (that I could not find before, unfortunately) helped me solve the problem.
Copying Content from referenced answer :
SQL Developer will look in the following location in this order for a tnsnames.ora file
$HOME/.tnsnames.ora
$TNS_ADMIN/tnsnames.ora
TNS_ADMIN lookup key in the registry
/etc/tnsnames.ora ( non-windows )
$ORACLE_HOME/network/admin/tnsnames.ora
LocalMachine\SOFTWARE\ORACLE\ORACLE_HOME_KEY
LocalMachine\SOFTWARE\ORACLE\ORACLE_HOME
If your tnsnames.ora file is not getting recognized, use the following procedure:
Define an environmental variable called TNS_ADMIN to point to the folder that contains your tnsnames.ora file.
In Windows, this is done by navigating to Control Panel > System > Advanced system settings > Environment Variables...
In Linux, define the TNS_ADMIN variable in the .profile file in your home directory.
Confirm the OS is recognizing this environment variable:
From the Windows command line: echo %TNS_ADMIN%
From linux: echo $TNS_ADMIN
Restart SQL Developer.
Now in SQL Developer, right click on Connections and select New Connection.... Select TNS as connection type in the drop down box. Your entries from tnsnames.ora should now display here.
When I copied the date format for timestamp and used it for date, it did not work. But changing the date format to this (DD-MON-YY HH12:MI:SS AM) worked for me.
The change has to be made in Tools->Preferences-> search for NLS
The SQL array type is not necessary. Not if the element type is a primitive one (varchar, number, date, ...).
Very basic sample:
declare
type TPidmList is table of sgbstdn.sgbstdn_pidm%type;
pidms TPidmList;
begin
select distinct sgbstdn_pidm
bulk collect into pidms
from sgbstdn
where sgbstdn_majr_code_1 = 'HS04'
and sgbstdn_program_1 = 'HSCOMPH';
-- do something with pidms
open :someCursor for
select value(t) pidm
from table(pidms) t;
end;
When you want to reuse it, it might be interesting to know what that would look like. If you issue several commands, those could be grouped in a package. The private package variable trick from above has its downsides: when you add variables to a package, you give it state, and now it doesn't act as a stateless bunch of functions but as some weird sort of singleton object instance instead.
For example, when you recompile the body, it will raise exceptions in sessions that already used it before (because the variable values got invalidated).
However, you could declare the type in a package (or globally in SQL) and use it as a parameter in the methods that should use it.
create package Abc as
type TPidmList is table of sgbstdn.sgbstdn_pidm%type;
function CreateList(majorCode in Varchar,
program in Varchar) return TPidmList;
function Test1(list in TPidmList) return PLS_Integer;
-- "in" to make it immutable so that PL/SQL can pass a pointer instead of a copy
procedure Test2(list in TPidmList);
end;
create package body Abc as
function CreateList(majorCode in Varchar,
program in Varchar) return TPidmList is
result TPidmList;
begin
select distinct sgbstdn_pidm
bulk collect into result
from sgbstdn
where sgbstdn_majr_code_1 = majorCode
and sgbstdn_program_1 = program;
return result;
end;
function Test1(list in TPidmList) return PLS_Integer is
result PLS_Integer := 0;
begin
if list is null or list.Count = 0 then
return result;
end if;
for i in list.First .. list.Last loop
if ... then
result := result + list(i);
end if;
end loop;
return result;
end;
procedure Test2(list in TPidmList) as
begin
...
end;
end;
How to call it:
declare
pidms constant Abc.TPidmList := Abc.CreateList('HS04', 'HSCOMPH');
xyz PLS_Integer;
begin
Abc.Test2(pidms);
xyz := Abc.Test1(pidms);
...
open :someCursor for
select value(t) as Pidm,
xyz as SomeValue
from table(pidms) t;
end;
You are doing a Cartesian join. This means that if you didn't even have the single where clause, the number of results you would get would be book_customer size times books size times book_order size times publisher size.
In other words, the result set gets blown up because you didn't add meaningful join clauses. Your correct query should look something like this:
SELECT bc.firstname, bc.lastname, b.title, TO_CHAR(bo.orderdate, 'MM/DD/YYYY') "Order Date", p.publishername
FROM book_customer bc, books b, book_order bo, publisher p
WHERE bc.book_id = b.book_id
AND bo.book_id = b.book_id
(etc.)
AND publishername = 'PRINTING IS US';
Note: usually it is advised not to use implicit joins like in this query, but to use the INNER JOIN
syntax. I am assuming, however, that this syntax is used in your study material, so I've left it in.
In addition to grants, you can try creating synonyms. It will avoid the need for specifying the table owner schema every time.
From the connecting schema:
CREATE SYNONYM pi_int FOR pct.pi_int;
Then you can query pi_int
as:
SELECT * FROM pi_int;
Assuming that you do not have the log file from the expdp job that generated the file in the first place, the easiest option would probably be to use the SQLFILE parameter to have impdp generate a file of DDL (based on a full import). Then you can grab the schema names from that file. Not ideal, of course, since impdp has to read the entire dump file to extract the DDL and then again to get to the schema you're interested in, and you have to do a bit of text file searching for the various CREATE USER statements, but it should be doable.
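As a sketch of that approach (all names are placeholders, as in the other impdp examples here):
impdp <username>/<password> directory=<directoryname> dumpfile=<filename>.dmp sqlfile=ddl_extract.sql full=y
The generated ddl_extract.sql can then be searched for the CREATE USER statements.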
ALL_TAB_COLUMNS
should be queryable from PL/SQL. DESC
is a SQL*Plus command.
SQL> desc all_tab_columns;
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER NOT NULL VARCHAR2(30)
TABLE_NAME NOT NULL VARCHAR2(30)
COLUMN_NAME NOT NULL VARCHAR2(30)
DATA_TYPE VARCHAR2(106)
DATA_TYPE_MOD VARCHAR2(3)
DATA_TYPE_OWNER VARCHAR2(30)
DATA_LENGTH NOT NULL NUMBER
DATA_PRECISION NUMBER
DATA_SCALE NUMBER
NULLABLE VARCHAR2(1)
COLUMN_ID NUMBER
DEFAULT_LENGTH NUMBER
DATA_DEFAULT LONG
NUM_DISTINCT NUMBER
LOW_VALUE RAW(32)
HIGH_VALUE RAW(32)
DENSITY NUMBER
NUM_NULLS NUMBER
NUM_BUCKETS NUMBER
LAST_ANALYZED DATE
SAMPLE_SIZE NUMBER
CHARACTER_SET_NAME VARCHAR2(44)
CHAR_COL_DECL_LENGTH NUMBER
GLOBAL_STATS VARCHAR2(3)
USER_STATS VARCHAR2(3)
AVG_COL_LEN NUMBER
CHAR_LENGTH NUMBER
CHAR_USED VARCHAR2(1)
V80_FMT_IMAGE VARCHAR2(3)
DATA_UPGRADED VARCHAR2(3)
HISTOGRAM VARCHAR2(15)
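The same information can also be fetched with a plain query, usable from PL/SQL (the owner and table names here are just examples):
SELECT column_name, data_type, data_length, nullable
FROM all_tab_columns
WHERE owner = 'HR'
AND table_name = 'EMPLOYEES'
ORDER BY column_id;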
You can also use a "here document" to do the same thing:
VARIABLE=SOMEVALUE
sqlplus connectioninfo << HERE
start file1.sql
start file2.sql $VARIABLE
quit
HERE
For SQL Server, a generic way to go by row number is as such:
SET ROWCOUNT @row --@row = the row number you wish to work on.
For Example:
set rowcount 20 --sets row to 20th row
select meat, cheese from dbo.sandwich --select columns from table at 20th row
set rowcount 0 --sets rowcount back to all rows
This will return the 20th row's information. Be sure to put in the rowcount 0 afterward.
SQL> SELECT TO_CHAR(date '1982-03-09', 'DAY') day FROM dual;
DAY
---------
TUESDAY
SQL> SELECT TO_CHAR(date '1982-03-09', 'DY') day FROM dual;
DAY
---
TUE
SQL> SELECT TO_CHAR(date '1982-03-09', 'Dy') day FROM dual;
DAY
---
Tue
(Note that the queries use ANSI date literals, which follow the ISO-8601 date standard and avoid date format ambiguity.)
1. solution
select * from emp
where rowid not in
(select max(rowid) from emp group by empno);
Try this:
SELECT *
FROM (SELECT * FROM (
SELECT
id,
client_id,
create_time,
ROW_NUMBER() OVER(PARTITION BY client_id ORDER BY create_time DESC) rn
FROM order
)
WHERE rn=1
ORDER BY create_time desc) alias_name
WHERE rownum <= 100
ORDER BY rownum;
Or TOP:
SELECT TOP 2 * FROM Customers; -- but this is not supported in Oracle
NOTE: I suppose that your internal query is fine. Please share your output of this.
You can use it to transform some aggregate functions into analytic:
SELECT MAX(date)
FROM mytable
will return 1
row with a single maximum,
SELECT MAX(date) OVER (ORDER BY id)
FROM mytable
will return all rows with a running maximum.
To remove all objects in oracle :
1) Dynamic
DECLARE
CURSOR IX IS
SELECT * FROM ALL_OBJECTS WHERE OBJECT_TYPE ='TABLE'
AND OWNER='SCHEMA_NAME';
CURSOR IY IS
SELECT * FROM ALL_OBJECTS WHERE OBJECT_TYPE
IN ('SEQUENCE',
'PROCEDURE',
'PACKAGE',
'FUNCTION',
'VIEW') AND OWNER='SCHEMA_NAME';
CURSOR IZ IS
SELECT * FROM ALL_OBJECTS WHERE OBJECT_TYPE IN ('TYPE') AND OWNER='SCHEMA_NAME';
BEGIN
FOR X IN IX LOOP
EXECUTE IMMEDIATE('DROP '||X.OBJECT_TYPE||' SCHEMA_NAME.'||X.OBJECT_NAME|| ' CASCADE CONSTRAINT');
END LOOP;
FOR Y IN IY LOOP
EXECUTE IMMEDIATE('DROP '||Y.OBJECT_TYPE||' SCHEMA_NAME.'||Y.OBJECT_NAME);
END LOOP;
FOR Z IN IZ LOOP
EXECUTE IMMEDIATE('DROP '||Z.OBJECT_TYPE||' SCHEMA_NAME.'||Z.OBJECT_NAME||' FORCE ');
END LOOP;
END;
/
2) Static
SELECT 'DROP TABLE "' || TABLE_NAME || '" CASCADE CONSTRAINTS;' FROM user_tables
union ALL
select 'drop '||object_type||' '|| object_name || ';' from user_objects
where object_type in ('VIEW','PACKAGE','SEQUENCE', 'PROCEDURE', 'FUNCTION')
union ALL
SELECT 'drop '
||object_type
||' '
|| object_name
|| ' force;'
FROM user_objects
WHERE object_type IN ('TYPE');
You can try this:
SELECT TO_CHAR(dbms_lob.substr(BLOB_FIELD, 3900)) FROM TABLE_WITH_BLOB;
However, it would be limited to 4000 bytes.
In my case this was because a file named ociw32.dll had been placed in c:\windows\system32. This is however only allowed to exist in c:\oracle\11.2.0.3\bin.
Deleting the file from system32, which had been placed there by an installation of Crystal Reports, fixed this issue
I have been looking for the same but I ended up writing a procedure to help me out:
CREATE OR REPLACE PROCEDURE DelObject(ObjName varchar2,ObjType varchar2)
IS
v_counter number := 0;
begin
if ObjType = 'TABLE' then
select count(*) into v_counter from user_tables where table_name = upper(ObjName);
if v_counter > 0 then
execute immediate 'drop table ' || ObjName || ' cascade constraints';
end if;
end if;
if ObjType = 'PROCEDURE' then
select count(*) into v_counter from User_Objects where object_type = 'PROCEDURE' and OBJECT_NAME = upper(ObjName);
if v_counter > 0 then
execute immediate 'DROP PROCEDURE ' || ObjName;
end if;
end if;
if ObjType = 'FUNCTION' then
select count(*) into v_counter from User_Objects where object_type = 'FUNCTION' and OBJECT_NAME = upper(ObjName);
if v_counter > 0 then
execute immediate 'DROP FUNCTION ' || ObjName;
end if;
end if;
if ObjType = 'TRIGGER' then
select count(*) into v_counter from User_Triggers where TRIGGER_NAME = upper(ObjName);
if v_counter > 0 then
execute immediate 'DROP TRIGGER ' || ObjName;
end if;
end if;
if ObjType = 'VIEW' then
select count(*) into v_counter from User_Views where VIEW_NAME = upper(ObjName);
if v_counter > 0 then
execute immediate 'DROP VIEW ' || ObjName;
end if;
end if;
if ObjType = 'SEQUENCE' then
select count(*) into v_counter from user_sequences where sequence_name = upper(ObjName);
if v_counter > 0 then
execute immediate 'DROP SEQUENCE ' || ObjName;
end if;
end if;
end;
Hope this helps
You are pretty confused, my friend. There are no LOOPS in SQL, only in PL/SQL. Here are a few examples based on an existing Oracle table - copy/paste to see the results:
-- Numeric FOR loop --
set serveroutput on -->> do not use in TOAD --
DECLARE
k NUMBER:= 0;
BEGIN
FOR i IN 1..10 LOOP
k:= k+1;
dbms_output.put_line(i||' '||k);
END LOOP;
END;
/
-- Cursor FOR loop --
set serveroutput on
DECLARE
CURSOR c1 IS SELECT * FROM scott.emp;
i NUMBER:= 0;
BEGIN
FOR e_rec IN c1 LOOP
i:= i+1;
dbms_output.put_line(i||chr(9)||e_rec.empno||chr(9)||e_rec.ename);
END LOOP;
END;
/
-- SQL example to generate 10 rows --
SELECT 1 + LEVEL-1 idx
FROM dual
CONNECT BY LEVEL <= 10
/
The oracle tag was not on the question when this answer was offered, and apparently it doesn't work with Oracle, but it does work with at least Postgres and MySQL.
No, just use the value directly:
begin
if (select count(*) from table) > 0 then
update table
end if;
end;
Note there is no need for an "else".
You can simply do it all within the update statement (ie no if
construct):
update table
set ...
where ...
and exists (select 'x' from table where ...)
Put the values in a temporary table and then do a select where id in (select id from temptable)
The best way is:
SELECT to_number(replace(:Str, ',', '')) / 100 --into num2
FROM dual;
You cannot access a local directory from pl/sql. If you use bfile, you will setup a directory (create directory) on the server where Oracle is running where you will need to put your images.
If you want to insert a handful of images from your local machine, you'll need a client side app to do this. You can write your own, but I typically use Toad for this. In schema browser, click onto the table. Click the data tab, and hit + sign to add a row. Double click the BLOB column, and a wizard opens. The far left icon will load an image into the blob:
SQL Developer has a similar feature. See the "Load" link below:
If you need to pull images over the wire, you can do it using PL/SQL, but it's not straightforward. First, you'll need to set up ACL access (for security reasons) to allow a user to pull over the wire. See this article for more on ACL setup.
Assuming ACL is complete, you'd pull the image like this:
declare
l_url varchar2(4000) := 'http://www.oracleimg.com/us/assets/12_c_navbnr.jpg';
l_http_request UTL_HTTP.req;
l_http_response UTL_HTTP.resp;
l_raw RAW(2000);
l_blob BLOB;
begin
-- Important: setup ACL access list first!
DBMS_LOB.createtemporary(l_blob, FALSE);
l_http_request := UTL_HTTP.begin_request(l_url);
l_http_response := UTL_HTTP.get_response(l_http_request);
-- Copy the response into the BLOB.
BEGIN
LOOP
UTL_HTTP.read_raw(l_http_response, l_raw, 2000);
DBMS_LOB.writeappend (l_blob, UTL_RAW.length(l_raw), l_raw);
END LOOP;
EXCEPTION
WHEN UTL_HTTP.end_of_body THEN
UTL_HTTP.end_response(l_http_response);
END;
insert into my_pics (pic_id, pic) values (102, l_blob);
commit;
DBMS_LOB.freetemporary(l_blob);
end;
Hope that helps.
The problem was the buggy implementation of sequenceExists in Liquibase. The changesets with these statements took a very long time and were accidentally aborted. On the next attempt to execute the Liquibase scripts, the lock was still held.
<changeSet author="user" id="123">
<preConditions onFail="CONTINUE">
<not><sequenceExists sequenceName="SEQUENCE_NAME_SEQ" /></not>
</preConditions>
<createSequence sequenceName="SEQUENCE_NAME_SEQ"/>
</changeSet>
A work around is using plain SQL to check this instead:
<changeSet author="user" id="123">
<preConditions onFail="CONTINUE">
<sqlCheck expectedResult="0">
select count(*) from user_sequences where sequence_name = 'SEQUENCE_NAME_SEQ';
</sqlCheck>
</preConditions>
<createSequence sequenceName="SEQUENCE_NAME_SEQ"/>
</changeSet>
Lockdata is stored in the table DATABASECHANGELOCK. To get rid of the lock you just change 1 to 0 or drop that table and recreate.
Rank and dense rank give the rank within the partitioned dataset.
Rank() : It doesn't give you consecutive integer numbers.
Dense_rank() : It gives you consecutive integer numbers.
In the above picture, the rank of the 10008 zip is 2 by the dense_rank() function and 24 by the rank() function, as it considers the row number.
Here is another solution, to only unlock the blocked user. From your command prompt, log in as SYSDBA:
sqlplus "/ as sysdba"
Then type the following command:
alter user <your_username> account unlock;
As you can see by reading the other answers, there are a lot of options available. If you are just doing < 10k rows you should go with the second option.
In short, for approximately >10k rows all the way up to, say, <100k rows, it is kind of a gray area. A lot of old geezers will bark at big rollback segments. But honestly, hardware and software have made amazing progress, to where you may be able to get away with option 2 for a lot of records if you only run the code a few times. Otherwise you should probably commit every 1k-10k or so rows. Here is a snippet that I use. I like it because it is short and I don't have to declare a cursor. Plus it has the benefits of bulk collect and forall.
begin
for r in (select rownum rn, t.* from foo t) loop
insert into bar (A,B,C) values (r.A,r.B,r.C);
if mod(r.rn,1000)=0 then
commit;
end if;
end loop;
commit;
end;
I found this link from the oracle site that illustrates the options in more detail.
Try this to move your table (tbl1) to tablespace (tblspc2).
alter table tb11 move tablespace tblspc2;
From my understanding, SQL statements don't need a forward slash, as they run automatically at the closing semicolon; this includes DDL, DML, DCL and TCL statements.
For PL/SQL blocks, including procedures, functions, packages and triggers, because they are multi-line programs, Oracle needs a way to know when to run the block, so we have to write a forward slash at the end of each block to let Oracle run it.
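A small sketch of the difference:
-- plain SQL: the semicolon is enough
SELECT sysdate FROM dual;
-- PL/SQL block: the slash on its own line tells SQL*Plus to run it
BEGIN
dbms_output.put_line('hello');
END;
/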
For everyone coming to this thread with fractional seconds in your timestamp use:
to_timestamp('2018-11-03 12:35:20.419000', 'YYYY-MM-DD HH24:MI:SS.FF')
Here is an alternative way:
select * from tbl where col like 'ABC%'
union
select * from tbl where col like 'XYZ%'
union
select * from tbl where col like 'PQR%';
Here is the test code to verify:
create table tbl (col varchar(255));
insert into tbl (col) values ('ABCDEFG'), ('HIJKLMNO'), ('PQRSTUVW'), ('XYZ');
select * from tbl where col like 'ABC%'
union
select * from tbl where col like 'XYZ%'
union
select * from tbl where col like 'PQR%';
+----------+
| col |
+----------+
| ABCDEFG |
| XYZ |
| PQRSTUVW |
+----------+
3 rows in set (0.00 sec)
alter table table_name rename column oldColumn to newColumn;
I had the same issue on a Windows 10 PC. I copied the project from my old computer to the new one, both 64 bits, and I installed the Oracle Client 64 bit on the new machine. I got the same error message, but after trying many solutions to no effect, what actually worked for me was this: In your Visual Studio (mine is 2017) go to Tools > Options > Projects and Solutions > Web Projects
On that page, check the option that says: Use the 64 bit version of IIS Express for Websites and Projects
select to_char(to_date('1/21/2000','mm/dd/yyyy'),'dd-mm-yyyy') from dual
WHERE 1 = 0
or similar false conditions work, but I dislike how they look. Marginally cleaner code for Oracle 12c+ IMHO is
CREATE TABLE bar AS
SELECT *
FROM foo
FETCH FIRST 0 ROWS ONLY;
Same limitations apply: only column definitions and their nullability are copied into a new table.
It is specific to your driver. You need to supply a parameter in your Java program to tell it the time zone you want to use.
java -Duser.timezone="America/New_York" GetCurrentDateTimeZone
Further this:
to_char(new_time(sched_start_time, 'CURRENT_TIMEZONE', 'NEW_TIMEZONE'), 'MM/DD/YY HH:MI AM')
May also be of value in handling the conversion properly. Taken from here
Here is an article on how to check and/or install new patches:
To find the OPatch tool, set up your database environment variables and then issue this command:
cd $ORACLE_HOME/OPatch
> pwd
/oracle/app/product/10.2.0/db_1/OPatch
To list all the patches applies to your database use the lsinventory
option:
[oracle@DCG023 8828328]$ opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation. All rights reserved.
Oracle Home : /u00/product/11.2.0/dbhome_1
Central Inventory : /u00/oraInventory
from : /u00/product/11.2.0/dbhome_1/oraInst.loc
OPatch version : 11.2.0.3.4
OUI version : 11.2.0.1.0
Log file location : /u00/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2013-11-13_13-55-22PM_1.log
Lsinventory Output file location : /u00/product/11.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2013-11-13_13-55-22PM.txt
Installed Top-level Products (1):
Oracle Database 11g 11.2.0.1.0
There are 1 products installed in this Oracle Home.
Interim patches (1) :
Patch 8405205 : applied on Mon Aug 19 15:18:04 BRT 2013
Unique Patch ID: 11805160
Created on 23 Sep 2009, 02:41:32 hrs PST8PDT
Bugs fixed:
8405205
OPatch succeeded.
To list the patches using sql :
select * from registry$history;
Prefer to use "set colsep" at the SQL*Plus prompt instead of editing column names one by one. Use sed to edit the output file.
set colsep '","' -- separate columns with a comma
sed 's/^/"/;s/$/"/;s/\s *"/"/g;s/"\s */"/g' $outfile > $outfile.csv
If any row would do, try:
select max(user)
from table;
No where clause.
Seems like the only way to get decimal in a pretty (for me) form requires some ridiculous code.
The only solution I got so far:
CASE WHEN xy > 0 AND xy < 1 THEN '0' || to_char(xy) ELSE to_char(xy) END
xy is a decimal.
xy query result
0.8 0.8 --not sth like .80
10 10 --not sth like 10.00
My Oracle is a bit rusty, but I think this would work:
SELECT * FROM TableA
WHERE ROWID IN ( SELECT MAX(ROWID) FROM TableA GROUP BY Language )
select * FROM doc_tab
PIVOT
(
Min(document_id)
FOR document_type IN ('Voters ID','Pan card','Drivers licence')
)
outputs as this
You can use the below query to get a list of table names which uses the specific column in DB2:
SELECT TBNAME
FROM SYSIBM.SYSCOLUMNS
WHERE NAME LIKE '%COLUMN_NAME';
Note: here, replace COLUMN_NAME with the column name that you are searching for.
One thing that was super easy and worked well for me was doing a TNSPing from a cmd prompt:
TNS Ping Utility for 32-bit Windows: Version 11.2.0.3.0 - Production on 13-MAR-2015 16:35:32
I think of it as a large array of binary data. The usability of BLOBs follows immediately from the limited bandwidth of the DB interface; it is not determined by the DB storage mechanisms. No matter how you store the large piece of data, the only way to store and retrieve it is the narrow database interface. The database is a bottleneck of the system. Why use it as a file server, which can easily be distributed? Normally you do not want to download the BLOB. You just want the DB to store your BLOB URLs. Deposit the BLOBs on a separate file server. Then you relieve the precious DB connection and provide unlimited bandwidth for large objects. This creates some issues of coherence, though.
I too had the same problem when I tried to create a connection in JDeveloper. Our server is located in a different timezone, and hence it raised the errors below:
ORA-00604: error occurred at recursive SQL level 1
ORA-01882: timezone region not found
I referred to many forums which suggested including the timezone in the Java Options (Run/Debug/Profile) of Project Properties and Default Project Properties as -Duser.timezone="+02:00",
but it didn't work for me. Finally the following solution worked for me.
Add the following line to the JDeveloper's configuration file (jdev.conf).
AddVMOption -Duser.timezone=UTC+02:00
The file is located in "<oracle installation root>\Middleware\jdeveloper\jdev\bin\jdev.conf".
getdate()
for MS-SQL, sysdate
for Oracle server
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Management.Common;
using System.IO;
using System.Data.SqlClient;
public partial class ExcuteScript : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
string sqlConnectionString = @"Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=ccwebgrity;Data Source=SURAJIT\SQLEXPRESS";
string script = File.ReadAllText(@"E:\Project Docs\MX462-PD\MX756_ModMappings1.sql");
SqlConnection conn = new SqlConnection(sqlConnectionString);
Server server = new Server(new ServerConnection(conn));
server.ConnectionContext.ExecuteNonQuery(script);
}
}
The only subquery appears to be this - try adding a ROWNUM
limit to the where to be sure:
(SELECT C.I_WORKDATE
FROM T_COMPENSATION C
WHERE C.I_COMPENSATEDDATE = A.I_REQDATE AND ROWNUM <= 1
AND C.I_EMPID = A.I_EMPID)
You do need to investigate why this isn't unique, however - e.g. the employee might have had more than one C.I_COMPENSATEDDATE
on the matched date.
For performance reasons, you should also see if the lookup subquery can be rearranged into an inner / left join, i.e.
SELECT
...
REPLACE(TO_CHAR(C.I_WORKDATE, 'DD-Mon-YYYY'),
' ',
'') AS WORKDATE,
...
INNER JOIN T_EMPLOYEE_MS E
...
LEFT OUTER JOIN T_COMPENSATION C
ON C.I_COMPENSATEDDATE = A.I_REQDATE
AND C.I_EMPID = A.I_EMPID
...
select v.SQL_TEXT,
v.PARSING_SCHEMA_NAME,
v.FIRST_LOAD_TIME,
v.DISK_READS,
v.ROWS_PROCESSED,
v.ELAPSED_TIME,
v.service
from v$sql v
where to_date(v.FIRST_LOAD_TIME,'YYYY-MM-DD hh24:mi:ss')>ADD_MONTHS(trunc(sysdate,'MM'),-2)
The where clause is optional. You can sort the results according to FIRST_LOAD_TIME and find the records from up to 2 months ago.
Use the REPLACE function.
eg: SELECT REPLACE ('t?es?t', '?', 'w');
declare
x number;
begin
x := myfunc(myargs);
end;
Alternatively:
select myfunc(myargs) from dual;
We had this error on Oracle RAC 11g on Windows, and the solution was to create the same OS directory tree and external file on both nodes.
alter sequence serial restart start with 1;
This feature was officially added in 18c but is unofficially available since 12.1.
It is arguably safe to use this undocumented feature in 12.1. Even though the syntax is not included in the official documentation, it is generated by the Oracle package DBMS_METADATA_DIFF. I've used it several times on production systems. However, I created an Oracle Service request and they verified that it's not a documentation bug, the feature is truly unsupported.
In 18c, the feature does not appear in the SQL Language Syntax, but is included in the Database Administrator's Guide.
This is not an answer, really, and I would have entered it as a comment had the question not been locked. It addresses the question:
Why would you want it?
Assume you have a table with the sequence as the primary key and the sequence is generated by an insert trigger. If you wanted to have the sequence available for subsequent updates to the record, you need to have a way to extract that value.
In order to make sure you get the right one, you might want to wrap the INSERT and RonK's query in a transaction.
RonK's Query:
select MY_SEQ_NAME.currval from DUAL;
In the above scenario, RonK's caveat does not apply since the insert and update would happen in the same session.
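A minimal sketch of that scenario (table and sequence names are hypothetical, with the id assumed to be filled in by an insert trigger):
INSERT INTO my_table (name) VALUES ('example');
SELECT MY_SEQ_NAME.currval FROM DUAL; -- the value generated for this session
COMMIT;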
How was the database exported?
If it was exported using exp
and a full schema was exported, then
Create the user:
create user <username> identified by <password> default tablespace <tablespacename> quota unlimited on <tablespacename>;
Grant the rights:
grant connect, create session, imp_full_database to <username>;
Start the import with imp
:
imp <username>/<password>@<hostname> file=<filename>.dmp log=<filename>.log full=y;
If it was exported using expdp
, then start the import with impdp
:
impdp <username>/<password> directory=<directoryname> dumpfile=<filename>.dmp logfile=<filename>.log full=y;
Looking at the error log, it seems you have not specified the directory, so Oracle tries to find the dmp
file in the default directory (i.e., E:\app\Vensi\admin\oratest\dpdump\
).
Either move the export file to the above path or create a directory object to pointing to the path where the dmp
file is present and pass the object name to the impdp
command above.
This may also happen if you have a faulty or accidental formula in your CSV file, i.e. one of the cells in your CSV file starts with an equals sign (=) (an Excel formula), which will in turn throw an error. If you fix or remove this formula by getting rid of the equals sign, it should solve the ORA-06502 error.
In addition to the Oracle instant client, you may also need to install the Oracle ODAC components and put the path to them into your system path. cx_Oracle seems to need access to the oci.dll file that is installed with them.
Also check that you get the correct version (32bit or 64bit) of them that matches your: python, cx_Oracle, and instant client versions.
The solution I opted for was to format the date with the mysql query :
String l_mysqlQuery = "SELECT DATE_FORMAT(time, '%Y-%m-%d %H:%i:%s') FROM uld_departure;"
l_importedTable = fStatement.executeQuery( l_mysqlQuery );
System.out.println(l_importedTable.getString( timeIndex));
I had the exact same issue.
Even though my mysql table contains dates formatted as such : 2017-01-01 21:02:50
String l_mysqlQuery = "SELECT time FROM uld_departure;"
l_importedTable = fStatement.executeQuery( l_mysqlQuery );
System.out.println(l_importedTable.getString( timeIndex));
was returning a date formatted as such :
2017-01-01 21:02:50.0
In Oracle 12c you can also declare an identity column
CREATE TABLE identity_test_tab (
id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
description VARCHAR2(30)
);
Examples & performance tests here ... where, in short, the conclusion is that direct use of the sequence or the new identity column is much faster than the triggers.
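A related 12c option is using a sequence as the column default instead of a trigger; a sketch with hypothetical names:
CREATE SEQUENCE identity_test_seq;
CREATE TABLE identity_test_tab2 (
id NUMBER DEFAULT identity_test_seq.NEXTVAL,
description VARCHAR2(30)
);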
SELECT A.identifier
, A.name
, TO_NUMBER(DECODE( A.month_no
, 1, 200803
, 2, 200804
, 3, 200805
, 4, 200806
, 5, 200807
, 6, 200808
, 7, 200809
, 8, 200810
, 9, 200811
, 10, 200812
, 11, 200701
, 12, 200702
, NULL)) as MONTH_NO
, TO_NUMBER(TO_CHAR(B.last_update_date, 'YYYYMM')) as UPD_DATE
FROM table_a A, table_b B
WHERE A.identifier = B.identifier
HAVING MONTH_NO > UPD_DATE
Try this:
TO_DATE('2011-07-28T23:54:14Z', 'YYYY-MM-DD"T"HH24:MI:SS"Z"')
I have found that if I save my query (spool_script_file.sql) and call it like this as a script (F5):
@c:\client\queries\spool_script_file.sql
my output is then just the results, without the commands at the top.
I found this solution on the oracle forums.
PERMISSIONS: I want to stress the importance of permissions for "sqlplus".
For any "Other" UNIX user other than the Owner/Group to be able to run sqlplus and access an ORACLE database , read/execute permissions are required (rx) for these 4 directories :
$ORACLE_HOME/bin , $ORACLE_HOME/lib, $ORACLE_HOME/oracore, $ORACLE_HOME/sqlplus
Environment. Set those properly:
A. ORACLE_HOME
(example: ORACLE_HOME=/u01/app/oranpgm/product/12.1.0/PRMNRDEV/
)
B. LD_LIBRARY_PATH
(example: ORACLE_HOME=/u01/app/oranpgm/product/12.1.0/PRMNRDEV/lib
)
C. ORACLE_SID
D. PATH
export PATH="$ORACLE_HOME/bin:$PATH"
If the view is accessed via a stored procedure, the execute grant is insufficient to access the view. You must grant select explicitly.
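A minimal sketch, with hypothetical schema, view and user names:
GRANT SELECT ON owner_schema.my_view TO app_user;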
Simply type this (using the object you want to expose):
grant all on <object_name> to public;
Try this:
-(void)textFieldDidBeginEditing:(UITextField *)sender
{
if ([sender isEqual:self.m_Sp_Contact])
{
[self.m_Scroller setContentOffset:CGPointMake(0, 105) animated:YES];
}
}
Super late here and I still couldn't uninstall using sudo
as the other answers suggest. What did it for me was checking where cordova
was installed by running
which cordova
it will output something like this
/usr/local/bin/
then removing by
rm -rf /usr/local/bin/cordova
Based on Mohamed23gharbi's answer:
function change(selector, value) {
var sortBySelect = document.querySelector(selector);
sortBySelect.value = value;
sortBySelect.dispatchEvent(new Event("change"));
}
function click(selector) {
var sortBySelect = document.querySelector(selector);
sortBySelect.dispatchEvent(new Event("click"));
}
function test() {
change("select#MySelect", 19);
click("button#MyButton");
click("a#MyLink");
}
In my case, where the elements were created by vue, this is the only way that works.
As the other answers suggest, the pprint module does the trick.
Nonetheless, for debugging, where you might need to put the entire list into a log file, one may have to use pprint's pformat method together with the logging module.
import logging
from pprint import pformat
logger = logging.getLogger('newlogger')
handler = logging.FileHandler('newlogger.log')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.WARNING)
data = [ (i, { '1':'one',
'2':'two',
'3':'three',
'4':'four',
'5':'five',
'6':'six',
'7':'seven',
'8':'eight',
})
for i in xrange(3)
]
logger.error(pformat(data))
And if you need to log it directly to a file, you would have to specify an output stream, using the stream keyword:
from pprint import pprint
with open('output.txt', 'wt') as out:
pprint(myTree, stream=out)
This might not directly answer your question, but for the sake of those who come here with state like the below:
state = {
currentstate:[
{
id: 1 ,
firstname: 'zinani',
sex: 'male'
}
]
}
Solution
const new_value = {
id: 2 ,
firstname: 'san',
sex: 'male'
}
Update the current state with the new value appended:
this.setState({ currentState: [...this.state.currentState, new_value] })
Updated answer for how to find which version of Swift your project is using in a few clicks in Xcode 12, to help out rookies like me.
I can recommend pre-generating the future index value; this is very useful in a lot of cases, like multi-user work, some exports, etc.
Just create an additional User_Seq table
with two fields: id (a unique index) and SeqVal nvarchar(1),
and create the following SP; generate the ID value from this SP and put it into the new User row!
CREATE procedure [dbo].[User_NextValue]
as
begin
set NOCOUNT ON
declare @existingId int = (select isnull(max(UserId)+1, 0) from dbo.User)
insert into User_Seq (SeqVal) values ('a')
declare @NewSeqValue int = scope_identity()
if @existingId > @NewSeqValue
begin
set identity_insert User_Seq on
insert into User_Seq (SeqID) values (@existingId)
set @NewSeqValue = scope_identity()
end
delete from User_Seq WITH (READPAST)
return @NewSeqValue
end
One way would be to store the current colour for each row within the model. Here's a simple model that is fixed at 3 columns and 3 rows:
static class MyTableModel extends DefaultTableModel {
List<Color> rowColours = Arrays.asList(
Color.RED,
Color.GREEN,
Color.CYAN
);
public void setRowColour(int row, Color c) {
rowColours.set(row, c);
fireTableRowsUpdated(row, row);
}
public Color getRowColour(int row) {
return rowColours.get(row);
}
@Override
public int getRowCount() {
return 3;
}
@Override
public int getColumnCount() {
return 3;
}
@Override
public Object getValueAt(int row, int column) {
return String.format("%d %d", row, column);
}
}
Note that setRowColour
calls fireTableRowsUpdated
; this will cause just that row of the table to be updated.
The renderer can get the model from the table:
static class MyTableCellRenderer extends DefaultTableCellRenderer {
@Override
public Component getTableCellRendererComponent(JTable table, Object value, boolean isSelected, boolean hasFocus, int row, int column) {
MyTableModel model = (MyTableModel) table.getModel();
Component c = super.getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column);
c.setBackground(model.getRowColour(row));
return c;
}
}
Changing a row's colour would be as simple as:
model.setRowColour(1, Color.YELLOW);
You can modify your REST project, so as to produce the needed static documents (html, pdf etc) upon building the project.
If you have a Java Maven project you can use the pom snippet below. It uses a series of plugins to generate a pdf and an html documentation (of the project's REST resources).
Please be aware that the order of execution matters, since the output of one plugin becomes the input to the next:
<plugin>
<groupId>com.github.kongchen</groupId>
<artifactId>swagger-maven-plugin</artifactId>
<version>3.1.3</version>
<configuration>
<apiSources>
<apiSource>
<springmvc>false</springmvc>
<locations>some.package</locations>
<basePath>/api</basePath>
<info>
<title>Put your REST service's name here</title>
<description>Add some description</description>
<version>v1</version>
</info>
<swaggerDirectory>${project.build.directory}/api</swaggerDirectory>
<attachSwaggerArtifact>true</attachSwaggerArtifact>
</apiSource>
</apiSources>
</configuration>
<executions>
<execution>
<phase>${phase.generate-documentation}</phase>
<!-- fx process-classes phase -->
<goals>
<goal>generate</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>io.github.robwin</groupId>
<artifactId>swagger2markup-maven-plugin</artifactId>
<version>0.9.3</version>
<configuration>
<inputDirectory>${project.build.directory}/api</inputDirectory>
<outputDirectory>${generated.asciidoc.directory}</outputDirectory>
<!-- specify location to place asciidoc files -->
<markupLanguage>asciidoc</markupLanguage>
</configuration>
<executions>
<execution>
<phase>${phase.generate-documentation}</phase>
<goals>
<goal>process-swagger</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<version>1.5.3</version>
<dependencies>
<dependency>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctorj-pdf</artifactId>
<version>1.5.0-alpha.11</version>
</dependency>
<dependency>
<groupId>org.jruby</groupId>
<artifactId>jruby-complete</artifactId>
<version>1.7.21</version>
</dependency>
</dependencies>
<configuration>
<sourceDirectory>${asciidoctor.input.directory}</sourceDirectory>
<!-- You will need to create an .adoc file. This is the input to this plugin -->
<sourceDocumentName>swagger.adoc</sourceDocumentName>
<attributes>
<doctype>book</doctype>
<toc>left</toc>
<toclevels>2</toclevels>
<generated>${generated.asciidoc.directory}</generated>
<!-- this path is referenced in swagger.adoc file. The given file will simply
point to the previously create adoc files/assemble them. -->
</attributes>
</configuration>
<executions>
<execution>
<id>asciidoc-to-html</id>
<phase>${phase.generate-documentation}</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
<configuration>
<backend>html5</backend>
<outputDirectory>${generated.html.directory}</outputDirectory>
<!-- specify location to place html file -->
</configuration>
</execution>
<execution>
<id>asciidoc-to-pdf</id>
<phase>${phase.generate-documentation}</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
<configuration>
<backend>pdf</backend>
<outputDirectory>${generated.pdf.directory}</outputDirectory>
<!-- specify location to place pdf file -->
</configuration>
</execution>
</executions>
</plugin>
The asciidoctor plugin assumes the existence of an .adoc file to work on. You can create one that simply collects the ones that were created by the swagger2markup plugin:
include::{generated}/overview.adoc[]
include::{generated}/paths.adoc[]
include::{generated}/definitions.adoc[]
If you want your generated html document to become part of your war file you have to make sure that it is present on the top level - static files in the WEB-INF folder will not be served. You can do this in the maven-war-plugin:
<plugin>
<artifactId>maven-war-plugin</artifactId>
<configuration>
<warSourceDirectory>WebContent</warSourceDirectory>
<failOnMissingWebXml>false</failOnMissingWebXml>
<webResources>
<resource>
<directory>${generated.html.directory}</directory>
<!-- Add swagger.pdf to WAR file, so as to make it available as static content. -->
</resource>
<resource>
<directory>${generated.pdf.directory}</directory>
<!-- Add swagger.html to WAR file, so as to make it available as static content. -->
</resource>
</webResources>
</configuration>
</plugin>
The war plugin works on the generated documentation - as such, you must make sure that those plugins have been executed in an earlier phase.
A HashMap can hold any object as a value, even if it is another HashMap. Eclipse is suggesting that you declare the types because that is the recommended practice for Collections under Java 5. You are free to ignore Eclipse's suggestions.
Under Java 5, an int (or any primitive type) will be autoboxed into an Integer (or other corresponding type) when you add it to a collection. Be careful with this though, as there are some catches to using autoboxing.
Just try with
$("._statusDDL").val("2");
and not with
$("._statusDDL").val(2);
I researched a bit into SVG webfonts and font creation. In my eyes, if you want to "add" glyphs to Font Awesome's already existing font, you need to do the following:
Go to Inkscape and create a glyph, and save it as SVG so you can read the code. Make sure to assign it a Unicode character which is not currently used, so it will not conflict with any of the existing glyphs in the font. This could be hard, so I think a simpler approach would be replacing an existing glyph with your own (just choose one you don't use, copy the first part:
Save the SVG, then convert it to a web font using any online converter so your web font will work in all browsers.
Once this is done, you should have an SVG with one of the glyphs replaced, in which case you're done. If not, you need to create the CSS for your new glyph; in this case try to reuse FA's existing CSS and only add:
/* CSS */
.FA.NEW-GLYPH:after {
content: 'WHATEVER AVAILABLE UNICODE CHARACTER YOU FOUND';
/* (other conditions should be copied from other fonts) */
}
I found myself requiring this functionality often enough that I packaged it into a library called std-pour. It should let you execute a command and view the output in real time. To install simply:
npm install std-pour
Then it's simple enough to execute a command and see the output in realtime:
const { pour } = require('std-pour');
pour('ping', ['8.8.8.8', '-c', '4']).then(code => console.log(`Error Code: ${code}`));
It's promised based so you can chain multiple commands. It's even function signature-compatible with child_process.spawn
so it should be a drop in replacement anywhere you're using it.
First of all, that is the resolution you would want to use, 1650,1080
.
Now add:
frame.setExtendedState(JFrame.MAXIMIZED_BOTH);
If you have issues with the components on the JFrame, then after you have added all the components using the frame.add(component)
method, add the following statement.
frame.pack();
For Leaflet, I'm using
map.setView(markersLayer.getBounds().getCenter());
You can do:
function has_dupes($array) {
$dupe_array = array();
foreach ($array as $val) {
if (++$dupe_array[$val] > 1) {
return true;
}
}
return false;
}
This post was made a while ago, but the answer provided did not solve the problem of reaching the request limit inside an iteration for me, so I am publishing this to help anyone else it hasn't worked for.
My environment was Ionic 3.
Instead of making a "pause" in the iteration, I came up with the idea of iterating with a timer. This timer executes the code that would go inside the iteration, but runs it every so often, until the maximum count of the array over which we want to iterate is reached.
In other words, we query the Google API at a fixed interval so that we do not exceed the allowed limit in milliseconds.
// Code to start the timer
this.count= 0;
let loading = this.loadingCtrl.create({
content: 'Buscando los mejores servicios...'
});
loading.present();
this.interval = setInterval(() => this.getDistancias(loading), 40);
// Function that runs the timer, that is, query Google API
getDistancias(loading){
if(this.count>= this.datos.length){
clearInterval(this.interval);
} else {
var sucursal = this.datos[this.count];
this.calcularDistancia(this.posicion, new LatLng(parseFloat(sucursal.position.latitude),parseFloat(sucursal.position.longitude)),sucursal.codigo).then(distancia => {
}).catch(error => {
console.log('error');
console.log(error);
});
}
this.count += 1;
}
calcularDistancia(miPosicion, markerPosicion, codigo){
return new Promise(async (resolve,reject) => {
var service = new google.maps.DistanceMatrixService;
var distance;
var duration;
service.getDistanceMatrix({
origins: [miPosicion, 'salida'],
destinations: [markerPosicion, 'llegada'],
travelMode: 'DRIVING',
unitSystem: google.maps.UnitSystem.METRIC,
avoidHighways: false,
avoidTolls: false
}, function(response, status){
if (status == 'OK') {
var originList = response.originAddresses;
var destinationList = response.destinationAddresses;
try{
if(response != null && response != undefined){
distance = response.rows[0].elements[0].distance.value;
duration = response.rows[0].elements[0].duration.text;
resolve(distance);
}
}catch(error){
console.log("ERROR GOOGLE");
console.log(status);
}
}
});
});
}
I hope this helps!
I'm sorry for my English, I hope it's not an inconvenience, I had to use the Google translator.
Regards, Leandro.
info = [];
info[0] = 'hi';
info[1] = 'hello';
$.ajax({
type: "POST",
data: {info:info},
url: "index.php",
success: function(msg){
$('.answer').html(msg);
}
});
When you invoke a function, it is termed 'calling' the function. For example, suppose you've defined a function that finds the average of two numbers like this:
def avgg(a,b) :
return (a+b)/2;
Now, to call the function, you do this:
x=avgg(4,6)
print x
The value of x will be 5.
I too had the same issue when I was trying to clone a repository on my Windows 7 machine. I tried most of the answers mentioned here. None of them worked for me.
What worked for me was, running the Pageant (Putty authentication agent) program. Once the Pageant was running in the background I was able to clone, push & pull from/to the repository. This worked for me, may be because I've setup my public key such that whenever it is used for the first time a password is required & the Pageant starts up.
You should always avoid using List<T> as a parameter. Not only does this pattern limit the caller's options for storing the data in a different kind of collection, it also forces the caller to convert the data into a List first.
Converting an IEnumerable into a List costs O(n), which is completely unnecessary, and it also creates a new object.
TL;DR: you should always use a proper interface like IEnumerable or IQueryable, based on what you want to do with your collection. ;)
In your case:
public void foo(IEnumerable<DateTime> dateTimes)
{
}
From the PHP manual:
The size of an integer is platform-dependent, although a maximum value of about two billion is the usual value (that's 32 bits signed). PHP does not support unsigned integers. Integer size can be determined using the constant PHP_INT_SIZE, and maximum value using the constant PHP_INT_MAX since PHP 4.4.0 and PHP 5.0.5.
64-bit platforms usually have a maximum value of about 9E18, except on Windows prior to PHP 7, where it was always 32 bit.
As we know, the height of a heap is log(n), where n is the total number of elements. Let's call it h.
When we perform the heapify (build-heap) operation, the elements at the last level (h) don't move at all.
The number of elements at the second-to-last level (h-1) is 2^(h-1), and they can move down at most 1 level (during heapify).
Similarly, at the i-th level we have 2^i elements, which can move at most h-i positions.
Therefore the total number of moves is
S = 2^h * 0 + 2^(h-1) * 1 + 2^(h-2) * 2 + ... + 2^0 * h
S = 2^h * (1/2 + 2/2^2 + 3/2^3 + ... + h/2^h)   ...(1)
This is an AGP (arithmetico-geometric) series; to solve it, divide both sides by 2:
S/2 = 2^h * (1/2^2 + 2/2^3 + ... + h/2^(h+1))   ...(2)
Subtracting equation (2) from (1) gives
S/2 = 2^h * (1/2 + 1/2^2 + 1/2^3 + ... + 1/2^h - h/2^(h+1))
S = 2^(h+1) * (1/2 + 1/2^2 + 1/2^3 + ... + 1/2^h - h/2^(h+1))
Now 1/2 + 1/2^2 + 1/2^3 + ... + 1/2^h is a decreasing GP whose sum is less than 1 (as h tends to infinity, the sum tends to 1). Taking 1 as an upper bound on this sum gives
S <= 2^(h+1) * 1 - h = 2 * 2^h - h
Since h = log(n), we have 2^h = n, so
S <= 2n - log(n)
and therefore the total cost of heapify is T(C) = O(n).
For the line breaks, I edited your code to get something with no line breaks.
#!/bin/bash
for i in /Users/anthonykiggundu/Sites/rku-it/*; do
t=$(stat -f "%Sm" -t "%Y-%m-%d %H:%M" "$i")
echo $t : "${i##*/}" # t only contains date last modified, then only filename 'grokked'- else $i alone is abs. path
done
I am amazed by the haphazardness of all of the solutions posted so far.
The one and only proper way to get the root folder of a C# project is to leverage the [CallerFilePath]
attribute to obtain the full path name of a source file, and then subtract the filename plus extension from it, leaving you with the path to the project.
Here is how to actually do it:
In the root folder of your project, add a file with the following content:
internal static class ProjectSourcePath
{
private const string myRelativePath = nameof(ProjectSourcePath) + ".cs";
private static string? lazyValue;
public static string Value => lazyValue ??= calculatePath();
private static string calculatePath()
{
string pathName = GetSourceFilePathName();
Assert( pathName.EndsWith( myRelativePath, System.StringComparison.Ordinal ) );
return pathName.Substring( 0, pathName.Length - myRelativePath.Length );
}
}
The string?
requires a pretty late version of C# with #nullable enable
; if you don't have it, then just remove the ?
.
The Assert()
function is my own, replace it with your own.
The function GetSourceFilePathName()
is defined as follows:
using System.Runtime.CompilerServices;
public static string GetSourceFilePathName( [CallerFilePath] string? callerFilePath = null ) //
=> callerFilePath ?? "";
Once you have this, you can use it as follows:
string projectSourcePath = ProjectSourcePath.Value;
My experience with Firefox is that adding the 'id' attribute to a video element causes Firefox to crash completely... as in asking you to submit a bug report. Remove the id attribute and it works fine. I am not sure if this is true for everyone, but I thought I'd share my experience in case it helps.
Removing any and all whitespace:
foo = ''.join(foo.split())
Removing last three characters:
foo = foo[:-3]
Converting to capital letters:
foo = foo.upper()
All of that code in one line:
foo = ''.join(foo.split())[:-3].upper()
Case: if you need to avoid the merge commit that gets created by default, follow these steps.
Say a new feature_branch is checked out from a master branch that already has some commits, and the feature branch then adds two commits of its own.
Now if you want to merge the feature_branch changes into master, run git merge feature_branch while sitting on master.
This will add all the commits to the master branch (4 in master + 2 in feature_branch = 6 in total), plus an extra merge commit like 'Merge branch 'feature_branch'', because master has diverged.
If you really want to skip those individual commits (the ones made in the feature branch) and add the whole set of changes from feature_branch as a single commit like 'Integrated feature branch changes into master', run git merge feature_branch --no-commit.
With --no-commit, git performs the merge and stops just before creating a merge commit. All the changes added on the feature branch are then staged in master, and we get the chance to create a new commit of our own.
Read here for more : https://git-scm.com/docs/git-merge
First parse the created_at field as Carbon object.
$createdAt = Carbon::parse($item['created_at']);
Then you can use
$suborder['payment_date'] = $createdAt->format('M d Y');
If you are on Linux, set JAVA_HOME using the syntax export JAVA_HOME=<path-to-java>. Actually, it is not only for Maven.
If the string you're pulling in happens to be a hex number such as E01, then Excel will translate it as 0 even if you use the CStr function, and even if you first deposit it in a String variable type. One way around the issue is to append ' to the beginning of the value.
For example, when pulling values out of a Word table, and bringing them to Excel:
strWr = "'" & WorksheetFunction.Clean(.cell(iRow, iCol).Range.Text)
WebSockets is a protocol that relies on a TCP streamed connection, although WebSockets is a message-based protocol.
If you want to implement your own protocol, I recommend using the latest and stable specification (as of 18/04/12), RFC 6455. This specification contains all the necessary information regarding the handshake and framing, as well as most of the description of behaviour scenarios on both the browser side and the server side. It is highly recommended to follow its recommendations for the server side when implementing your code.
In a few words, I would describe working with WebSockets like this:
Create a server Socket (System.Net.Sockets), bind it to a specific port, and keep listening, accepting connections asynchronously. Something like this:
Socket serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);
serverSocket.Bind(new IPEndPoint(IPAddress.Any, 8080));
serverSocket.Listen(128);
serverSocket.BeginAccept(null, 0, OnAccept, null);
You should have an accepting function, "OnAccept", that implements the handshake. Later it should run on another thread if the system is meant to handle a huge number of connections per second.
private void OnAccept(IAsyncResult result)
{
    try
    {
        Socket client = null;
        if (serverSocket != null && serverSocket.IsBound)
        {
            client = serverSocket.EndAccept(result);
        }
        if (client != null)
        {
            /* Handshaking and managing ClientSocket */
        }
    }
    catch (SocketException exception)
    {
    }
    finally
    {
        if (serverSocket != null && serverSocket.IsBound)
        {
            serverSocket.BeginAccept(null, 0, OnAccept, null);
        }
    }
}
After the connection is established, you have to do the handshake. Based on section 1.3 of the specification, Opening Handshake, after the connection is established you will receive a basic HTTP request with some information. Example:
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Origin: http://example.com
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13
This example is based on protocol version 13. Bear in mind that older versions have some differences, but the latest versions are mostly cross-compatible. Different browsers may send you some additional data, for example browser and OS details, cache headers and others.
Based on the provided handshake details, you have to generate the answer lines. They are mostly the same, but they will contain an Accept-Key that is based on the provided Sec-WebSocket-Key. Section 1.3 of the specification describes clearly how to generate the response key. Here is the function I've been using for v13:
static private string guid = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

private string AcceptKey(ref string key)
{
    string longKey = key + guid;
    SHA1 sha1 = SHA1CryptoServiceProvider.Create();
    byte[] hashBytes = sha1.ComputeHash(System.Text.Encoding.ASCII.GetBytes(longKey));
    return Convert.ToBase64String(hashBytes);
}
The handshake answer looks like this:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
But the accept key has to be the one generated from the key provided by the client, using the AcceptKey method shown above. Also make sure that after the last character of the accept key you put two new lines, "\r\n\r\n".
Implementing your own WebSockets protocol definitely has some benefits: the great experience you gain, as well as control over the protocol itself. But you have to spend some time doing it, and make sure the implementation is highly reliable.
At the same time, you might have a look at ready-to-use solutions, of which Google (again) has plenty.
In my work we do not use macros, so the solution provided by @TomSwift inspired me. I looked at the implementation of CGRectMake and created an equivalent CGRectSetPos, but without macros.
CG_INLINE CGRect
CGRectSetPos(CGRect frame, CGFloat x, CGFloat y)
{
CGRect rect;
rect.origin.x = x; rect.origin.y = y;
rect.size.width = frame.size.width; rect.size.height = frame.size.height;
return rect;
}
To use it, I only pass the frame, X and Y:
viewcontroller.view.frame = CGRectSetPos(viewcontroller.view.frame, 100, 100);
Works for me ^_^
You need to specify all of the names, including those already registered.
I used the following command originally to register some certificates:
/opt/certbot/certbot-auto certonly --webroot --agree-tos -w /srv/www/letsencrypt/ \
--email [email protected] \
--expand -d example.com,www.example.com
... and just now I successfully used the following command to expand my registration to include a new subdomain as a SAN:
/opt/certbot/certbot-auto certonly --webroot --agree-tos -w /srv/www/letsencrypt/ \
--expand -d example.com,www.example.com,click.example.com
From the documentation:
--expand "If an existing cert covers some subset of the requested names, always expand and replace it with the additional names."
Don't forget to restart the server to load the new certificates if you are running nginx.
Let ρ (radius) and φ (azimuth) be two random variables corresponding to the polar coordinates of an arbitrary point inside the circle. If the points are uniformly distributed, then what is the distribution function of ρ and φ?
For any r: 0 < r < R, the probability of the radius coordinate ρ being less than r is
P[ρ < r] = P[point is within a circle of radius r] = S1 / S0 = (r/R)^2
where S1 and S0 are the areas of the circles of radius r and R respectively. So the CDF can be given as:
        0          if r <= 0
CDF =   (r/R)^2    if 0 < r <= R
        1          if r > R
And the PDF:
PDF = d/dr(CDF) = 2 * (r/R^2)   for 0 < r <= R.
Note that for R=1 the random variable sqrt(X), where X is uniform on [0, 1), has this exact CDF (because P[sqrt(X) < y] = P[X < y^2] = y^2 for 0 < y <= 1).
The distribution of φ is obviously uniform from 0 to 2π. Now you can create random polar coordinates and convert them to Cartesian using the trigonometric equations:
x = ρ * cos(φ)
y = ρ * sin(φ)
I can't resist posting Python code for R=1.
from matplotlib import pyplot as plt
import numpy as np
rho = np.sqrt(np.random.uniform(0, 1, 5000))
phi = np.random.uniform(0, 2*np.pi, 5000)
x = rho * np.cos(phi)
y = rho * np.sin(phi)
plt.scatter(x, y, s = 4)
You will get a scatter plot of points filling the unit circle uniformly.
The following code works for me:
Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
startActivityForResult(cameraIntent, 2);
And here is the result:
protected void onActivityResult(int requestCode, int resultCode, Intent imageReturnedIntent)
{
super.onActivityResult(requestCode, resultCode, imageReturnedIntent);
if(resultCode == RESULT_OK)
{
Uri selectedImage = imageReturnedIntent.getData();
ImageView photo = (ImageView) findViewById(R.id.add_contact_label_photo);
Bitmap mBitmap = null;
try
{
mBitmap = Media.getBitmap(this.getContentResolver(), selectedImage);
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
Just make sure the label is associated with the input.
<fieldset>
<legend>What metasyntactic variables do you like?</legend>
<input type="checkbox" name="foo" value="bar" id="foo_bar">
<label for="foo_bar">Bar</label>
<input type="checkbox" name="foo" value="baz" id="foo_baz">
<label for="foo_baz">Baz</label>
</fieldset>
I found the following snippet while reading the source for Tempfile#initialize in the Ruby core library:
begin
  tmpname = File.join(tmpdir, make_tmpname(basename, n))
  lock = tmpname + '.lock'
  n += 1
end while @@cleanlist.include?(tmpname) or File.exist?(lock) or File.exist?(tmpname)
At first glance, I assumed the while modifier would be evaluated before the contents of begin...end, but that is not the case. Observe:
>> begin
?>   puts "do {} while ()"
>> end while false
do {} while ()
=> nil
As you would expect, the loop will continue to execute while the modifier is true.
>> n = 3
=> 3
>> begin
?>   puts n
>>   n -= 1
>> end while n > 0
3
2
1
=> nil
While I would be happy to never see this idiom again, begin...end is quite powerful. The following is a common idiom to memoize a one-liner method with no params:
def expensive
  @expensive ||= 2 + 2
end
Here is an ugly, but quick way to memoize something more complex:
def expensive
  @expensive ||= begin
    n = 99
    buf = ""
    begin
      buf << "#{n} bottles of beer on the wall\n"
      # ...
      n -= 1
    end while n > 0
    buf << "no more bottles of beer"
  end
end
Originally written by Jeremy Voorhis. The content has been copied here because it seems to have been taken down from the originating site. Copies can also be found in the Web Archive and at Ruby Buzz Forum. -Bill the Lizard
Try Pulley:
Pulley is an easy to use drawer library meant to imitate the drawer in iOS 10's Maps app. It exposes a simple API that allows you to use any UIViewController subclass as the drawer content or the primary content.
Yes, but it's generally a very bad idea to force another thread to interrupt on a random line of code. You would only do this if you intend to shut down the process.
What you can do is use Thread.interrupt() for a task after a certain amount of time. However, unless the code checks for this, it won't work. An ExecutorService can make this easier with Future.cancel(true).
It's much better for the code to time itself and stop when it needs to.
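As a minimal sketch of that ExecutorService approach (the task body and timeout here are made up for illustration; a well-behaved task should check the interrupt flag so that cancel(true) actually stops it):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // Hypothetical long-running task that cooperates with interruption.
        Future<Long> future = executor.submit(() -> {
            long sum = 0;
            for (long i = 0; i < 10_000_000_000L; i++) {
                if (Thread.currentThread().isInterrupted()) {
                    throw new InterruptedException("task cancelled");
                }
                sum += i;
            }
            return sum;
        });

        try {
            // Wait at most 2 seconds for the result.
            System.out.println("Result: " + future.get(2, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            // Interrupts the worker thread if the task is still running.
            future.cancel(true);
            System.out.println("Task timed out and was cancelled");
        } finally {
            executor.shutdown();
        }
    }
}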
In the above code the variable "ver" is assigned null; print "ver" before returning and check its value. Because "ver" is null, the service sends the status "204 No Content".
As for status code "405 - Method Not Allowed": you get this status code when the REST controller or service only supports the GET method but the client tries a POST with a valid URI; in that scenario you get "405 - Method Not Allowed".
Not so much a weird feature, but one that's really irritating from a type-safety point of view: array covariance in C#.
class Foo { }
class Bar : Foo { }
class Baz : Foo { }
Foo[] foo = new Bar[1];
foo[0] = new Baz(); // Oh snap! Compiles, but throws ArrayTypeMismatchException at runtime
This was inherited (pun intentional) from Java, I believe.
There's also ShortGuid - A shorter and url friendly GUID class in C#. It's available as a Nuget. More information here.
PM> Install-Package CSharpVitamins.ShortGuid
Usage:
Guid guid = Guid.NewGuid();
ShortGuid sguid1 = guid; // implicitly cast the guid as a shortguid
Console.WriteLine(sguid1);
Console.WriteLine(sguid1.Guid);
This produces a new guid, uses that guid to create a ShortGuid, and displays the two equivalent values in the console. Results would be something along the lines of:
ShortGuid: FEx1sZbSD0ugmgMAF_RGHw
Guid: b1754c14-d296-4b0f-a09a-030017f4461f
There is something else that causes this error: when you do not add the return keyword in front of res.send, res.json, etc.
My problem was that in httpd.conf the DocumentRoot
and <Directory>
entries were pointing to non-existing folders.
For example, the 'original' httpd.conf had the following entries:
DocumentRoot "c:/Apache24/htdocs"
<Directory "c:/Apache24/htdocs">
If you've installed in C:\xampp then you need to change those entries to match, i.e.
DocumentRoot "c:/xampp/htdocs"
<Directory "c:/xampp/htdocs">
There are different names for SD cards.
This code checks every possible name (I don't guarantee that these are all of them, but most are included).
It prefers the main storage.
private String SDPath() {
String sdcardpath = "";
//Datas
if (new File("/data/sdext4/").exists() && new File("/data/sdext4/").canRead()){
sdcardpath = "/data/sdext4/";
}
if (new File("/data/sdext3/").exists() && new File("/data/sdext3/").canRead()){
sdcardpath = "/data/sdext3/";
}
if (new File("/data/sdext2/").exists() && new File("/data/sdext2/").canRead()){
sdcardpath = "/data/sdext2/";
}
if (new File("/data/sdext1/").exists() && new File("/data/sdext1/").canRead()){
sdcardpath = "/data/sdext1/";
}
if (new File("/data/sdext/").exists() && new File("/data/sdext/").canRead()){
sdcardpath = "/data/sdext/";
}
//MNTS
if (new File("mnt/sdcard/external_sd/").exists() && new File("mnt/sdcard/external_sd/").canRead()){
sdcardpath = "mnt/sdcard/external_sd/";
}
if (new File("mnt/extsdcard/").exists() && new File("mnt/extsdcard/").canRead()){
sdcardpath = "mnt/extsdcard/";
}
if (new File("mnt/external_sd/").exists() && new File("mnt/external_sd/").canRead()){
sdcardpath = "mnt/external_sd/";
}
if (new File("mnt/emmc/").exists() && new File("mnt/emmc/").canRead()){
sdcardpath = "mnt/emmc/";
}
if (new File("mnt/sdcard0/").exists() && new File("mnt/sdcard0/").canRead()){
sdcardpath = "mnt/sdcard0/";
}
if (new File("mnt/sdcard1/").exists() && new File("mnt/sdcard1/").canRead()){
sdcardpath = "mnt/sdcard1/";
}
if (new File("mnt/sdcard/").exists() && new File("mnt/sdcard/").canRead()){
sdcardpath = "mnt/sdcard/";
}
//Storages
if (new File("/storage/removable/sdcard1/").exists() && new File("/storage/removable/sdcard1/").canRead()){
sdcardpath = "/storage/removable/sdcard1/";
}
if (new File("/storage/external_SD/").exists() && new File("/storage/external_SD/").canRead()){
sdcardpath = "/storage/external_SD/";
}
if (new File("/storage/ext_sd/").exists() && new File("/storage/ext_sd/").canRead()){
sdcardpath = "/storage/ext_sd/";
}
if (new File("/storage/sdcard1/").exists() && new File("/storage/sdcard1/").canRead()){
sdcardpath = "/storage/sdcard1/";
}
if (new File("/storage/sdcard0/").exists() && new File("/storage/sdcard0/").canRead()){
sdcardpath = "/storage/sdcard0/";
}
if (new File("/storage/sdcard/").exists() && new File("/storage/sdcard/").canRead()){
sdcardpath = "/storage/sdcard/";
}
if (sdcardpath.contentEquals("")){
sdcardpath = Environment.getExternalStorageDirectory().getAbsolutePath();
}
Log.v("SDFinder","Path: " + sdcardpath);
return sdcardpath;
}
The important part is this:
Cannot find class [com.rakuten.points.persistence.manager.MemberPointSummaryDAOImpl] for bean with name 'MemberPointSummaryDAOImpl' defined in ServletContext resource [/WEB-INF/context/PersistenceManagerContext.xml];
due to:
nested exception is java.lang.ClassNotFoundException: com.rakuten.points.persistence.manager.MemberPointSummaryDAOImpl
According to this log, Spring could not find your MemberPointSummaryDAOImpl
class.
You can change the comment character to something besides # like this:
git config --global core.commentchar "@"
The input does not have to be a list of records - it can be a single dictionary as well:
pd.DataFrame.from_records({'a':1,'b':2}, index=[0])
a b
0 1 2
Which seems to be equivalent to:
pd.DataFrame({'a':1,'b':2}, index=[0])
a b
0 1 2
I think you may have installed the version of mongodb for the wrong system distro.
Take a look at how to install mongodb for ubuntu and debian:
http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/ http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
I had a similar problem, and what happened was that I was installing the Ubuntu packages on Debian.
One consideration is that FTP can use non-standard ports, which can make getting though firewalls difficult (especially if you're using SSL). HTTP is typically on a known port, so this is rarely a problem.
If you do decide to use FTP, make sure you read about Active and Passive FTP.
In terms of performance, at the end of the day they're both spewing files directly down TCP connections so should be about the same.
You can use this on Android. Works fine for me.
private static final Pattern localeMatcher = Pattern.compile
("^([^_]*)(_([^_]*)(_#(.*))?)?$");
public static Locale parseLocale(String value) {
Matcher matcher = localeMatcher.matcher(value.replace('-', '_'));
return matcher.find()
? TextUtils.isEmpty(matcher.group(5))
? TextUtils.isEmpty(matcher.group(3))
? TextUtils.isEmpty(matcher.group(1))
? null
: new Locale(matcher.group(1))
: new Locale(matcher.group(1), matcher.group(3))
: new Locale(matcher.group(1), matcher.group(3),
matcher.group(5))
: null;
}
In regards to the question in your comment:
Assuming that you've previously bound your function to the click event of the radio button, add this to your $(document).ready
function:
$('#[radioButtonOptionID]').click()
Without a parameter, that simulates the click event.
This is round robin DNS. This is a quite simple solution for load balancing. Usually DNS servers rotate/shuffle the DNS records for each incoming DNS request. Unfortunately it's not a real solution for fail-over. If one of the servers fail, some visitors will still be directed to this failed server.
Try this code:
private void RegisterInStartup(bool isChecked)
{
RegistryKey registryKey = Registry.CurrentUser.OpenSubKey
("SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run", true);
if (isChecked)
{
registryKey.SetValue("ApplicationName", Application.ExecutablePath);
}
else
{
registryKey.DeleteValue("ApplicationName");
}
}
Source (dead): http://www.dotnetthoughts.net/2010/09/26/run-the-application-at-windows-startup/
Archived link: https://web.archive.org/web/20110104113608/http://www.dotnetthoughts.net/2010/09/26/run-the-application-at-windows-startup/
The simplest approach is to keep your properties files in the resources folder, either src/main/resources or src/test/resources, and then use the code below to read them:
public class Utilities {

    private static ResourceBundle rb1;

    static {
        // Do not use the .properties extension here.
        rb1 = ResourceBundle.getBundle("fileNameWithoutExtension");
    }

    public static String getConfigProperties(String keyString) {
        return rb1.getString(keyString);
    }
}
Some timing tests for cpython 3 shows that a simple for loop is the fastest way, and it's quite readable. Adding in a function doesn't cause much overhead either:
timeit results (10k iterations):
all(x.pop(v) for v in r) # 0.85
all(map(x.pop, r)) # 0.60
list(map(x.pop, r)) # 0.70
all(map(x.__delitem__, r)) # 0.44
del_all(x, r) # 0.40
<inline for loop>(x, r) # 0.35
def del_all(mapping, to_remove):
"""Remove list of elements from mapping."""
for key in to_remove:
del mapping[key]
For small iterations, doing that 'inline' was a bit faster, because of the overhead of the function call. But del_all
is lint-safe, reusable, and faster than all the python comprehension and mapping constructs.
Python allows you to use a string as an iterator:
for character in 'string':
print(character)
I'm guessing it's your job to figure out how to turn that into a while loop.
The answers are commonly found in Java books.
Cloning: if you don't override the clone method, the default behavior is a shallow copy. If your objects have only primitive member variables, that's totally fine. But when other objects are held as member variables, it's a headache.
serialization/deserialization
$new_object = unserialize(serialize($your_object))
This achieves deep copy with a heavy cost depending on the complexity of the object.
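For the Java side, here is a minimal sketch of the same serialize/deserialize trick (the helper name is made up; every object in the graph must implement Serializable):
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class DeepCopyUtil {

    // Deep-copies an object graph by writing it to a byte array and reading it back.
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T deepCopy(T original) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytesOut = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytesOut)) {
            out.writeObject(original);
        }
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytesOut.toByteArray()))) {
            return (T) in.readObject();
        }
    }
}
As with the PHP one-liner, this trades performance for simplicity, depending on the complexity of the object graph.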
I found that the above did not work for me. I was pulling a cell value from a JTable but could not cast to double or int etc. My solution:
Object obj = getTable().getValueAt(row, 0);
where row 0 would always be a number. Hope this helps anyone still scrolling!
You can use the ThenBy and ThenByDescending extension methods:
foobarList.OrderBy(x => x.Foo).ThenBy( x => x.Bar)
I don't know about others, but I was used to define a "global constant" (DEBUG
) and then a global function (debug(msg)
) that would print msg
only if DEBUG == True
.
Then I write my debug statements like:
debug('My value: %d' % value)
...then I pick up unit testing and never did this again! :)
There are two ways to sum a column:
dataset = pd.read_csv("data.csv")
1: sum(dataset.Column_name)
2: dataset['Column_Name'].sum()
If there is any issue with this, please correct me.
In PowerShell 2.0, it is still not possible to get the Copy-Item cmdlet to create the destination folder, you'll need code like this:
$destinationFolder = "C:\My Stuff\Subdir"
if (!(Test-Path -path $destinationFolder)) {New-Item $destinationFolder -Type Directory}
Copy-Item "\\server1\Upgrade.exe" -Destination $destinationFolder
If you use -Recurse in the Copy-Item it will create all the subfolders of the source structure in the destination but it won't create the actual destination folder, even with -Force.
With MVC5 I have done it like this, and it is 100% working code:
@Html.ActionLink(department.Name, "Index", "Employee", new {
departmentId = department.DepartmentID }, null)
You guys can get an idea from this...
If you are testing the server on localhost, your Android device must be connected to the same local network. The server URL used by your app must then include your computer's IP address, not "localhost".
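A minimal sketch of what that looks like in the app (the class name, IP address and port are hypothetical placeholders; use your own computer's LAN address):
// Hypothetical example: replace 192.168.1.23:8080 with your computer's LAN IP and port.
public final class ApiConfig {
    public static final String BASE_URL = "http://192.168.1.23:8080/api/";

    private ApiConfig() { }
}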
Add the file to the index:
git add path/to/untracked-file
git stash
The entire contents of the index, plus any unstaged changes to existing files, will all make it into the stash.
It is called the Conditional Operator (which is a ternary operator).
It has the form of: condition
? value-if-true
: value-if-false
Think of the ?
as "then" and :
as "else".
Your code is equivalent to
if (max != 0)
hsb.s = 255 * delta / max;
else
hsb.s = 0;
Perhaps this will help:
List of XML and HTML character entity references:
In SGML, HTML and XML documents, the logical constructs known as character data and attribute values consist of sequences of characters, in which each character can manifest directly (representing itself), or can be represented by a series of characters called a character reference, of which there are two types: a numeric character reference and a character entity reference. This article lists the character entity references that are valid in HTML and XML documents.
That article lists the following five predefined XML entities:
quot "
amp &
apos '
lt <
gt >
I had the same problem, and I just did Invalidate Caches / Restart.
I see that Character.isDigit perfectly suits the need, since the input will be just one symbol. Of course we don't have any info about this kb object but just in case it's a java.util.Scanner instance, I'd also suggest using java.io.InputStreamReader for command line input. Here's an example:
java.io.BufferedReader reader = new java.io.BufferedReader(new java.io.InputStreamReader(System.in));
try {
reader.read();
}
catch(Exception e) {
e.printStackTrace();
}
reader.close();
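Tying the two together, here is a minimal sketch (class and variable names are made up) that reads a single character and checks it with Character.isDigit:
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class DigitCheck {
    public static void main(String[] args) throws Exception {
        BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
        int input = reader.read();  // reads one character as an int, or -1 at end of stream
        if (input != -1 && Character.isDigit((char) input)) {
            System.out.println("You entered a digit");
        } else {
            System.out.println("Not a digit");
        }
        reader.close();
    }
}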
The problem is
listModel.addElement(listaRosa.getSelectedValue());
listModel.removeElement(listaRosa.getSelectedValue());
you may be adding an element and immediately removing it, since both the add and remove operations are on the same listModel.
Try
private void aggiungiTitolareButtonActionPerformed(java.awt.event.ActionEvent evt) {
DefaultListModel lm2 = (DefaultListModel) listaTitolari.getModel();
DefaultListModel lm1 = (DefaultListModel) listaRosa.getModel();
if(lm2 == null)
{
lm2 = new DefaultListModel();
listaTitolari.setModel(lm2);
}
lm2.addElement(listaTitolari.getSelectedValue());
lm1.removeElement(listaTitolari.getSelectedValue());
}
You can use open to check whether a file exists and is readable:
try:
    open(inputFile, 'r').close()
except IOError:
    print('File is missing or not readable')
Another simple way to do this is by using append
which will allocate the slice in the process.
arr := []int{1, 2, 3}
tmp := append([]int(nil), arr...) // Notice the ... splat
fmt.Println(tmp)
fmt.Println(arr)
Output (as expected):
[1 2 3]
[1 2 3]
So a shorthand for copying array arr
would be append([]int(nil), arr...)
Log location:
${JENKINS_HOME}/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/log
Get log as a text and save to workspace:
cat ${JENKINS_HOME}/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/log >> log.txt
All of the solutions I've seen so far also match commented-out lines. This one doesn't, if the comment character is ;:
awk -F '=' '{if (! ($0 ~ /^;/) && $0 ~ /database_version/) print $2}' file.ini
In my case I had to set delaysContentTouches
to true because the objects inside the scrollView were all capturing the touch events and handling themselves rather than letting the scrollView itself handle it.
It can be used not only inside methods, but also inside classes:
class Calculator
{
public static int Sum(int x,int y) => x + y;
public static Func<int, int, int> Add = (x, y) => x + y;
public static Action<int,int> DisplaySum = (x, y) => Console.WriteLine(x + y);
}
One checkbox to rule them all
For people still looking for a plugin to control checkboxes through one master checkbox that is lightweight, has out-of-the-box support for UniformJS and iCheck, and gets unchecked when at least one of the controlled checkboxes is unchecked (and, of course, gets checked when all controlled checkboxes are checked), I've created a jQuery checkAll plugin.
Feel free to check the examples on documentation page.
For this question example all you need to do is:
$( "#checkAll" ).checkall({
target: "input:checkbox"
});
Isn't that clear and simple?
To extend @Dave's answer...if planRec.approved_by is already a string
this.approved_by = planRec.approved_by ?? "";
Try this:
location / {
root /path/to/root;
expires 30d;
access_log off;
}
location ~* ^.*\.php$ {
if (!-f $request_filename) {
return 404;
}
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:8080;
}
Hopefully it works. Regular expressions have higher priority than plain string prefixes, so all requests ending in .php should be forwarded to Apache, as long as a corresponding .php file exists. The rest will be handled as static files. The actual algorithm for evaluating location is here.
No, it doesn't, see: R Language Definition: Operators
If you are using VB, you need to drop the semicolon:
<% Response.Redirect("new.aspx", true) %>
If you are running the same app on multiple ports where the app uses a single database (H2), then add AUTO_SERVER=TRUE to the URL as follows:
jdbc:h2:file:C:/simple-commerce/price;DB_CLOSE_ON_EXIT=FALSE;AUTO_RECONNECT=TRUE;AUTO_SERVER=TRUE
With Java 7's try-with-resources Jiri's answer can be improved upon:
try (BufferedReader br = new BufferedReader(new FileReader("foo.txt"))) {
String line = null;
while ((line = br.readLine()) != null) {
System.out.println(line);
}
}
Add exception handling at the place of your choice, either in this try
or elsewhere.
Your JSON isn't an array, so it has no length property. You must change the data you return or the way you get your data count.
I was looking for an answer on what |=
does in Groovy and although answers above are right on they did not help me understand a particular piece of code I was looking at.
In particular, when applied to a boolean variable, |= will set it to TRUE the first time it encounters a truthy expression on the right-hand side, and it will HOLD that TRUE value for all subsequent |= calls. Like a latch.
Here a simplified example of this:
groovy> boolean result
groovy> //------------
groovy> println result //<-- False by default
groovy> println result |= false
groovy> println result |= true //<-- set to True and latched on to it
groovy> println result |= false
Output:
false
false
true
true
Edit: Why is this useful?
Consider a situation where you want to know if anything has changed on a variety of objects and, if so, notify someone of the changes. So you would set up a hasChanges boolean and apply |= diff(a,b), then |= diff(b,c), and so on.
Here is a brief example:
groovy> boolean hasChanges, a, b, c, d
groovy> diff = {x,y -> x!=y}
groovy> hasChanges |= diff(a,b)
groovy> hasChanges |= diff(b,c)
groovy> hasChanges |= diff(true,false)
groovy> hasChanges |= diff(c,d)
groovy> hasChanges
Result: true
Here is another set of methods to get parts of the date:
new Date().getDate() // Get the day as a number (1-31)
new Date().getDay() // Get the weekday as a number (0-6)
new Date().getFullYear() // Get the four digit year (yyyy)
new Date().getHours() // Get the hour (0-23)
new Date().getMilliseconds() // Get the milliseconds (0-999)
new Date().getMinutes() // Get the minutes (0-59)
new Date().getMonth() // Get the month (0-11)
new Date().getSeconds() // Get the seconds (0-59)
new Date().getTime() // Get the time (milliseconds since January 1, 1970)
You can retrieve it from the post object like so:
global $post;
$post->post_name;
Another option, not necessarily more elegant, but one that does not require referring to a specific column:
mtcars %>%
group_by(cyl, gear) %>%
do(data.frame(nrow=nrow(.)))
<Stack.Screen
name="SignInScreen"
component={Screens.SignInScreen}
options={{ headerShown: false }}
/>
options={{ headerShown: false }}
works for me.
** "@react-navigation/native": "^5.0.7",
"@react-navigation/stack": "^5.0.8",
For people already using lodash
Most of these example are really good and cover a lot of cases. But if you 'know' that you only have English text, here's my version that's super easy to read :)
_.words(_.toLower(text)).join('-')
Always google so you can locate the latest package for both NPP and NPP Plugins.
I googled "notepad++ 64bit". Downloaded the free latest version at Notepad++ (64-bit) - Free download and software. Installed notepad++ by double-click on npp.?.?.?.Installer.x64.exe, installed the .exe to default Windows 64bit path which is, "C:\Program Files".
Then, I googled "notepad++ 64 json viewer plug". Knowing SourceForge.Net is a renowned download site, downloaded JSToolNpp [email protected]. I unzipped and copied JSMinNPP.dll to notePad++ root dir.
I loaded my newly installed notepad++ 64bit. I went to Settings and selected [import plug-in]. I pointed to the location of JSMinNPP.dll and clicked open.
I reloaded notepad++, went to PlugIns menu. To format one-line json string to multi-line json doc, I clicked JSTool->JSFormat or reverse multi-line json doc to one-line json string by JSTool->JSMin (json-Minified)!
You can stash the uncommitted changes using "git stash", then check out a new branch using "git checkout -b <new-branch>", and then apply the stashed changes with "git stash apply".
Canvas zoom and pan
<!DOCTYPE html>
<html>
<body>

<canvas id="myCanvas" width="" height=""
style="border:1px solid #d3d3d3;">
Your browser does not support the canvas element.
</canvas>

<script>
console.log("canvas")
var ox=0,oy=0,px=0,py=0,scx=1,scy=1;
var canvas = document.getElementById("myCanvas");
canvas.onmousedown=(e)=>{px=e.x;py=e.y;canvas.onmousemove=(e)=>{ox-=(e.x-px);oy-=(e.y-py);px=e.x;py=e.y;} }

canvas.onmouseup=()=>{canvas.onmousemove=null;}
canvas.onwheel =(e)=>{let bfzx,bfzy,afzx,afzy;[bfzx,bfzy]=StoW(e.x,e.y);scx-=10*scx/e.deltaY;scy-=10*scy/e.deltaY;
    [afzx,afzy]=StoW(e.x,e.y);
    ox+=(bfzx-afzx);
    oy+=(bfzy-afzy);
}
var ctx = canvas.getContext("2d");

function draw(){
    window.requestAnimationFrame(draw);
    ctx.clearRect(0,0,canvas.width,canvas.height);
    for(let i=0;i<=100;i+=10){
        let sx=0,sy=i;
        let ex=100,ey=i;
        [sx,sy]=WtoS(sx,sy);
        [ex,ey]=WtoS(ex,ey);
        ctx.beginPath();
        ctx.moveTo(sx, sy);
        ctx.lineTo(ex, ey);
        ctx.stroke();
    }
    for(let i=0;i<=100;i+=10){
        let sx=i,sy=0;
        let ex=i,ey=100;
        [sx,sy]=WtoS(sx,sy);
        [ex,ey]=WtoS(ex,ey);
        ctx.beginPath();
        ctx.moveTo(sx, sy);
        ctx.lineTo(ex, ey);
        ctx.stroke();
    }
}
draw()
function WtoS(wx,wy){
    let sx=(wx-ox)*scx;
    let sy=(wy-oy)*scy;
    return[sx,sy];
}
function StoW(sx,sy){
    let wx=sx/scx+ox;
    let wy=sy/scy+oy;
    return[wx,wy];
}

</script>

</body>
</html>
Just to add on information from another popular framework, Google Closures, see their dom/classes class:
goog.dom.classes.add(element, var_args)
goog.dom.classes.addRemove(element, classesToRemove, classesToAdd)
goog.dom.classes.remove(element, var_args)
One option for selecting the element is using goog.dom.query with a CSS3 selector:
var myElement = goog.dom.query("#MyElement")[0];
Navigate to your Play Store page:
https://play.google.com/store/apps/details?id=com.yourpackage
using a standard HTTP GET. The following jQuery then finds the important info for you:
$("[itemprop='softwareVersion']").text()
$(".recent-change").each(function() { all += $(this).text() + "\n"; })
Now that you can extract this information manually, simply make a method in your app that does this for you.
public static String[] getAppVersionInfo(String playUrl) {
HtmlCleaner cleaner = new HtmlCleaner();
CleanerProperties props = cleaner.getProperties();
props.setAllowHtmlInsideAttributes(true);
props.setAllowMultiWordAttributes(true);
props.setRecognizeUnicodeChars(true);
props.setOmitComments(true);
try {
URL url = new URL(playUrl);
URLConnection conn = url.openConnection();
TagNode node = cleaner.clean(new InputStreamReader(conn.getInputStream()));
Object[] new_nodes = node.evaluateXPath("//*[@class='recent-change']");
Object[] version_nodes = node.evaluateXPath("//*[@itemprop='softwareVersion']");
String version = "", whatsNew = "";
for (Object new_node : new_nodes) {
TagNode info_node = (TagNode) new_node;
whatsNew += info_node.getAllChildren().get(0).toString().trim()
+ "\n";
}
if (version_nodes.length > 0) {
TagNode ver = (TagNode) version_nodes[0];
version = ver.getAllChildren().get(0).toString().trim();
}
return new String[]{version, whatsNew};
} catch (IOException | XPatherException e) {
e.printStackTrace();
return null;
}
}
This uses HtmlCleaner.
You have to create the colors.xml
file in the res/values
folder of your project. The code of colors.xml
is
<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="orange">#ff5500</color>
<color name="white">#ffffff</color>
<color name="transparent">#00000000</color>
<color name="date_color">#999999</color>
<color name="black">#000000</color>
<color name="gray">#999999</color>
<color name="blue">#0066cc</color>
<color name="gold">#e6b121</color>
<color name="blueback">#99FFFF</color>
<color name="articlecolor">#3399FF</color>
<color name="article_title">#3399FF</color>
<color name="cachecolor">#8ad0e8</color>
</resources>
Or, you can use Colors in your application by following way
android.graphics.Color.TRANSPARENT;
Similarly
android.graphics.Color.RED;
Although View.getVisibility() does get the visibility, it's not a simple true/false. A view can have its visibility set to one of three things:
View.VISIBLE - the view is visible.
View.INVISIBLE - the view is invisible, but any space it would normally take up is still used. It's "invisible".
View.GONE - the view is gone; you can't see it and it doesn't take up the "spot".
So to answer your question, you're looking for:
if (myImageView.getVisibility() == View.VISIBLE) {
// Its visible
} else {
// Either gone or invisible
}
If the component is an EJB, then, there shouldn't be a problem injecting an EM.
But....In JBoss 5, the JAX-RS integration isn't great. If you have an EJB, you cannot use scanning and you must manually list in the context-param resteasy.jndi.resource. If you still have scanning on, Resteasy will scan for the resource class and register it as a vanilla JAX-RS service and handle the lifecycle.
This is probably the problem.
POST variables should be accessible via the request object: HttpRequest.getParameterMap(). The exception is if the form is sending multipart MIME data (the FORM has enctype="multipart/form-data"). In that case, you need to parse the byte stream with a MIME parser. You can write your own or use an existing one like the Apache Commons File Upload API.
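As a minimal sketch of the Apache Commons FileUpload approach (assuming the commons-fileupload dependency is on the classpath; the helper class and field handling here are illustrative):
import java.util.List;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

public class MultipartHelper {

    // Parses a multipart/form-data request and prints its fields.
    public static void handle(HttpServletRequest request) throws Exception {
        if (!ServletFileUpload.isMultipartContent(request)) {
            return; // regular POST: use request.getParameterMap() instead
        }
        ServletFileUpload upload = new ServletFileUpload(new DiskFileItemFactory());
        List<FileItem> items = upload.parseRequest(request);
        for (FileItem item : items) {
            if (item.isFormField()) {
                System.out.println(item.getFieldName() + " = " + item.getString());
            } else {
                System.out.println("uploaded file: " + item.getName());
            }
        }
    }
}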
const
in C++ does not mean that a value is a constant.
const
in C++ implies that the client of a contract undertakes not to alter its value.
Whether the value of a const
expression changes becomes more evident if you are in an environment which supports thread based concurrency.
As Java was designed from the start to support thread and lock concurrency, it didn't add to confusion by overloading the term to have the semantics that final
has.
eg:
#include <iostream>
int main ()
{
volatile const int x = 42;
std::cout << x << std::endl;
*const_cast<int*>(&x) = 7;
std::cout << x << std::endl;
return 0;
}
outputs 42 then 7.
Although x is marked as const, because a non-const alias is created, x is not a constant. Not every compiler requires volatile for this behaviour (though every compiler is permitted to inline the constant).
With more complicated systems you get const/non-const aliases without use of const_cast
, so getting into the habit of thinking that const means something won't change becomes more and more dangerous. const
merely means that your code can't change it without a cast, not that the value is constant.
Your Activity is extending ActionBarActivity, which requires the AppCompat theme to be applied.
Change from ActionBarActivity to Activity or FragmentActivity; that will solve the problem.
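As a minimal sketch of that change (the class name here is made up):
// Before (requires an AppCompat theme):
// public class MainActivity extends ActionBarActivity { ... }

// After (no AppCompat theme required):
public class MainActivity extends FragmentActivity {
    // ...
}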
Step 1: Open "Maven Projects"
Step 2: Select the project you want to import:
Solved my problem by adding this to my ListView
:
android:scrollbars="none"
See the 'non-fast forward' section of 'git push --help' for details.
You can perform "git pull", resolve potential conflicts, and "git push" the result. A "git pull" will create a merge commit C between commits A and B.
Alternatively, you can rebase your change between X and B on top of A, with "git pull --rebase", and push the result back. The rebase will create a new commit D that builds the change between X and B on top of A.
You can use the patchValue function for setting defaults for some of the values in your form group.
component.html
<form [formGroup]="countryForm">
<select id="country" formControlName="country">
<option *ngFor="let c of countries" [ngValue]="c">{{ c }}</option>
</select>
</form>
component.ts
import { FormControl, FormGroup, Validators } from '@angular/forms';
export class Component implements OnInit {
  countries: string[] = ['USA', 'UK', 'Canada'];
  default: string = 'UK';
  countryForm: FormGroup;
  ngOnInit() {
    this.countryForm = new FormGroup({
      'country': new FormControl(null)
    });
    // Either set the single control directly...
    this.countryForm.controls['country'].setValue(this.default, { onlySelf: true });
    // ...or patch the form group with the default value.
    this.countryForm.patchValue({
      'country': this.default
    });
  }
}
First try this: don't use php composer.phar [parameters]; simply use composer [parameters]. If this doesn't work for you, then try the rest. Hope it helps.
date_default_timezone_set('Asia/Kolkata');
$curDateTime = date("Y-m-d H:i:s");
$myDate = date("Y-m-d H:i:s", strtotime("2018-06-26 16:15:33"));
if($myDate < $curDateTime){
echo "active";exit;
}else{
echo "inactive";exit;
}
Initialize empty frame with column names
import pandas as pd
col_names = ['A', 'B', 'C']
my_df = pd.DataFrame(columns = col_names)
my_df
Add a new record to a frame
my_df.loc[len(my_df)] = [2, 4, 5]
You also might want to pass a dictionary:
my_dic = {'A':2, 'B':4, 'C':5}
my_df.loc[len(my_df)] = my_dic
Append another frame to your existing frame
col_names = ['A', 'B', 'C']
my_df2 = pd.DataFrame(columns = col_names)
my_df = my_df.append(my_df2)
Performance considerations
If you are adding rows inside a loop, consider performance issues: for around the first 1000 records "my_df.loc" performs well, but it gradually becomes slower as the number of records in the loop grows.
If you plan to do this inside a big loop (say 10M records or so), you are better off mixing the two approaches: fill a temporary dataframe with loc until its size reaches around 1000, then append it to the original dataframe and empty the temporary one. This can boost your performance by around 10 times.
C:\xampp\mysql\bin\mysql -u root -p testdatabase < C:\Users\Juan\Desktop\databasebackup.sql
That worked for me to import 400MB file into my database.
In the case of a request to a REST service:
You need to allow CORS (cross-origin resource sharing) on the endpoint of your REST service with the Spring annotation:
@CrossOrigin(origins = "http://localhost:8080")
Very good tutorial: https://spring.io/guides/gs/rest-service-cors/
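A minimal sketch of where the annotation goes (the controller and endpoint names are made up; see the linked Spring guide for the full example):
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    // Allow pages served from http://localhost:8080 to call this endpoint from the browser.
    @CrossOrigin(origins = "http://localhost:8080")
    @GetMapping("/greeting")
    public String greeting() {
        return "Hello, CORS!";
    }
}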
Create a PROCEDURE to generate custom code using a template:
create PROCEDURE [dbo].[createCode]
(
@TableName sysname = '',
@befor varchar(max)='public class @TableName
{',
@templet varchar(max)='
public @ColumnType @ColumnName { get; set; } // @ColumnDesc ',
@after varchar(max)='
}'
)
AS
BEGIN
declare @result varchar(max)
set @befor =replace(@befor,'@TableName',@TableName)
set @result=@befor
select @result = @result
+ replace(replace(replace(replace(replace(@templet,'@ColumnType',ColumnType) ,'@ColumnName',ColumnName) ,'@ColumnDesc',ColumnDesc),'@ISPK',ISPK),'@max_length',max_length)
from
(
select
column_id,
replace(col.name, ' ', '_') ColumnName,
typ.name as sqltype,
typ.max_length,
is_identity,
pkk.ISPK,
case typ.name
when 'bigint' then 'long'
when 'binary' then 'byte[]'
when 'bit' then 'bool'
when 'char' then 'String'
when 'date' then 'DateTime'
when 'datetime' then 'DateTime'
when 'datetime2' then 'DateTime'
when 'datetimeoffset' then 'DateTimeOffset'
when 'decimal' then 'decimal'
when 'float' then 'float'
when 'image' then 'byte[]'
when 'int' then 'int'
when 'money' then 'decimal'
when 'nchar' then 'char'
when 'ntext' then 'string'
when 'numeric' then 'decimal'
when 'nvarchar' then 'String'
when 'real' then 'double'
when 'smalldatetime' then 'DateTime'
when 'smallint' then 'short'
when 'smallmoney' then 'decimal'
when 'text' then 'String'
when 'time' then 'TimeSpan'
when 'timestamp' then 'DateTime'
when 'tinyint' then 'byte'
when 'uniqueidentifier' then 'Guid'
when 'varbinary' then 'byte[]'
when 'varchar' then 'string'
else 'UNKNOWN_' + typ.name
END + CASE WHEN col.is_nullable=1 AND typ.name NOT IN ('binary', 'varbinary', 'image', 'text', 'ntext', 'varchar', 'nvarchar', 'char', 'nchar') THEN '?' ELSE '' END ColumnType,
isnull(colDesc.colDesc,'') AS ColumnDesc
from sys.columns col
join sys.types typ on
col.system_type_id = typ.system_type_id AND col.user_type_id = typ.user_type_id
left join
(
SELECT c.name AS 'ColumnName', CASE WHEN dd.pk IS NULL THEN 'false' ELSE 'true' END ISPK
FROM sys.columns c
JOIN sys.tables t ON c.object_id = t.object_id
LEFT JOIN (SELECT K.COLUMN_NAME , C.CONSTRAINT_TYPE as pk
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS K
LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS C
ON K.TABLE_NAME = C.TABLE_NAME
AND K.CONSTRAINT_NAME = C.CONSTRAINT_NAME
AND K.CONSTRAINT_CATALOG = C.CONSTRAINT_CATALOG
AND K.CONSTRAINT_SCHEMA = C.CONSTRAINT_SCHEMA
WHERE K.TABLE_NAME = @TableName) as dd
ON dd.COLUMN_NAME = c.name
WHERE t.name = @TableName
) pkk on ColumnName=col.name
OUTER APPLY (
SELECT TOP 1 CAST(value AS NVARCHAR(max)) AS colDesc
FROM
sys.extended_properties
WHERE
major_id = col.object_id
AND
minor_id = COLUMNPROPERTY(major_id, col.name, 'ColumnId')
) colDesc
where object_id = object_id(@TableName)
) t
set @result=@result+@after
select @result
--print @result
END
Now generate custom code.
For example, a C# class:
exec [createCode] @TableName='book',@templet ='
public @ColumnType @ColumnName { get; set; } // @ColumnDesc '
output is
public class book
{
public long ID { get; set; } //
public String Title { get; set; } // Book Title
}
for LINQ
exec [createCode] @TableName='book'
, @befor ='[System.Data.Linq.Mapping.Table(Name = "@TableName")]
public class @TableName
{',
@templet ='
[System.Data.Linq.Mapping.Column(Name = "@ColumnName", IsPrimaryKey = @ISPK)]
public @ColumnType @ColumnName { get; set; } // @ColumnDesc
' ,
@after ='
}'
output is
[System.Data.Linq.Mapping.Table(Name = "book")]
public class book
{
[System.Data.Linq.Mapping.Column(Name = "ID", IsPrimaryKey = true)]
public long ID { get; set; } //
[System.Data.Linq.Mapping.Column(Name = "Title", IsPrimaryKey = false)]
public String Title { get; set; } // Book Title
}
for java class
exec [createCode] @TableName='book',@templet ='
public @ColumnType @ColumnName ; // @ColumnDesc
public @ColumnType get@ColumnName()
{
return this.@ColumnName;
}
public void set@ColumnName(@ColumnType @ColumnName)
{
this.@ColumnName=@ColumnName;
}
'
output is
public class book
{
public long ID ; //
public long getID()
{
return this.ID;
}
public void setID(long ID)
{
this.ID=ID;
}
public String Title ; // Book Title
public String getTitle()
{
return this.Title;
}
public void setTitle(String Title)
{
this.Title=Title;
}
}
for android sugarOrm model
exec [createCode] @TableName='book'
, @befor ='@Table(name = "@TableName")
public class @TableName
{',
@templet ='
@Column(name = "@ColumnName")
public @ColumnType @ColumnName ;// @ColumnDesc
' ,
@after ='
}'
output is
@Table(name = "book")
public class book
{
@Column(name = "ID")
public long ID ;//
@Column(name = "Title")
public String Title ;// Book Title
}
According to this link, it solved by entering this command:
export LC_ALL=C
Box Selecting
Windows & Linux: Shift + Alt + 'Mouse Left Button'
macOS: Shift + option + 'Click'
Esc to exit selection.
MacOS: Shift + Alt/Option + Command + 'arrow key'
var jArr = [
{
id : "001",
name : "apple",
category : "fruit",
color : "red"
},
{
id : "002",
name : "melon",
category : "fruit",
color : "green"
},
{
id : "003",
name : "banana",
category : "fruit",
color : "yellow"
}
]
var tableData = '<table><tr><td>Id</td><td>Name</td><td>Category</td><td>Color</td></tr>';
$.each(jArr, function(index, data) {
tableData += '<tr><td>'+data.id+'</td><td>'+data.name+'</td><td>'+data.category+'</td><td>'+data.color+'</td></tr>';
});
$('div').html(tableData);
In the comments of http://www.php.net/manual/de/function.mysql-db-name.php I found this one from ericpp % bigfoot.com:
If you just need the current database name, you can use MySQL's SELECT DATABASE() command:
<?php
function mysql_current_db() {
$r = mysql_query("SELECT DATABASE()") or die(mysql_error());
return mysql_result($r,0);
}
?>
Both approaches call a constructor, they just call different ones. This code:
var albumData = new Album
{
Name = "Albumius",
Artist = "Artistus",
Year = 2013
};
is syntactic shorthand for this equivalent code:
var albumData = new Album();
albumData.Name = "Albumius";
albumData.Artist = "Artistus";
albumData.Year = 2013;
The two are almost identical after compilation (close enough for nearly all intents and purposes). So if the parameterless constructor wasn't public:
public Album() { }
then you wouldn't be able to use the object initializer at all anyway. So the main question isn't which to use when initializing the object, but which constructor(s) the object exposes in the first place. If the object exposes two constructors (like the one in your example), then one can assume that both ways are equally valid for constructing an object.
Sometimes objects don't expose parameterless constructors because they require certain values for construction. Though in cases like that you can still use the initializer syntax for other values. For example, suppose you have these constructors on your object:
private Album() { }
public Album(string name)
{
this.Name = name;
}
Since the parameterless constructor is private, you can't use that. But you can use the other one and still make use of the initializer syntax:
var albumData = new Album("Albumius")
{
Artist = "Artistus",
Year = 2013
};
The post-compilation result would then be identical to:
var albumData = new Album("Albumius");
albumData.Artist = "Artistus";
albumData.Year = 2013;
If you're using the compiled Bootstrap, one way of fixing it is by editing bootstrap.min.js. Find the line
$next[0].offsetWidth // force reflow
and change it to
if (typeof $next == 'object' && $next.length) $next[0].offsetWidth // force reflow
Likely you will not only need to split into train and test, but also cross validation to make sure your model generalizes. Here I am assuming 70% training data, 20% validation and 10% holdout/test data.
Check out the np.split:
If indices_or_sections is a 1-D array of sorted integers, the entries indicate where along axis the array is split. For example, [2, 3] would, for axis=0, result in
ary[:2]
ary[2:3]
ary[3:]
t, v, h = np.split(df.sample(frac=1, random_state=1), [int(0.7*len(df)), int(0.9*len(df))])
Locking a file is usually a platform-specific operation, so you may need to allow for the possibility of running on different operating systems. For example:
import os

def my_lock(f):
    if os.name == "posix":
        # Unix or OS X specific locking here, e.g. an exclusive flock
        import fcntl
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    elif os.name == "nt":
        # Windows specific locking here, e.g. locking the first byte of the file
        import msvcrt
        msvcrt.locking(f.fileno(), msvcrt.LK_LOCK, 1)
    else:
        print("Unknown operating system, lock unavailable")
To follow up on nikobelia's answer:
For those using Jenkins to run the playbook, I just added the environment variable ANSIBLE_HOST_KEY_CHECKING=False to my Jenkins job before running ansible-playbook. For instance:
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook 'playbook.yml' \
--extra-vars="some vars..." \
--tags="tags_name..." -vv
You can use SweetAlert2.
Import it into your HTML:
<script src="https://cdn.jsdelivr.net/npm/sweetalert2@8"></script>
and to fire the alert:
Swal.fire({
title: 'Do you want to do this?',
text: "You won't be able to revert this!",
type: 'warning',
showCancelButton: true,
confirmButtonColor: '#3085d6',
cancelButtonColor: '#d33',
confirmButtonText: 'Yes, Do this!',
cancelButtonText: 'No'
}).then((result) => {
if (result.value) {
Swal.fire(
'Done!',
'This has been done.',
'success'
)
}
})
For more details, visit the SweetAlert2 website.
I resolved this issue for me. Initially I tried to do this:
git submodule add --branch master [URL] [PATH_TO_SUBMODULE]
As it turns out, the --branch option should not be specified if you want to track the master branch. It throws this error:
fatal: Cannot force update the current branch.
Unable to checkout submodule '[PATH_TO_SUBMODULE]'
Every time you then try to do a
git submodule sync
this error will be thrown:
No submodule mapping found in .gitmodules for path '[PATH_TO_SUBMODULE]'
And the lines needed in .gitmodules are never added.
So the solution for me was this:
git submodule add [URL] [PATH_TO_SUBMODULE]
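If you do want the submodule to track a branch, one approach (just a sketch; the URL and path are placeholders) is to add it first and record the branch afterwards:
git submodule add [URL] [PATH_TO_SUBMODULE]
git config -f .gitmodules submodule.[PATH_TO_SUBMODULE].branch master
git submodule update --remote [PATH_TO_SUBMODULE]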
I created a universal nvm that works on both Unix (bash) and Windows, based on another simple nvm.
It doesn't need admin on Windows, but requires PowerShell 4+ and the right to execute scripts.
Actually, in C++14 it can be done with very few lines of code.
This is very similar in idea to @Paul's solution. Due to things missing from C++11, that solution is a bit unnecessarily bloated (plus defining in std smells). Thanks to C++14 we can make it a lot more readable.
The key observation is that range-based for-loops work by relying on begin() and end() in order to acquire the range's iterators. Thanks to ADL, one doesn't even need to define their custom begin() and end() in the std:: namespace.
Here is a very simple sample solution:
// -------------------------------------------------------------------
// --- Reversed iterable
template <typename T>
struct reversion_wrapper { T& iterable; };
template <typename T>
auto begin (reversion_wrapper<T> w) { return std::rbegin(w.iterable); }
template <typename T>
auto end (reversion_wrapper<T> w) { return std::rend(w.iterable); }
template <typename T>
reversion_wrapper<T> reverse (T&& iterable) { return { iterable }; }
This works like a charm, for instance:
template <typename T>
void print_iterable (std::ostream& out, const T& iterable)
{
for (auto&& element: iterable)
out << element << ',';
out << '\n';
}
int main (int, char**)
{
using namespace std;
// on prvalues
print_iterable(cout, reverse(initializer_list<int> { 1, 2, 3, 4, }));
// on const lvalue references
const list<int> ints_list { 1, 2, 3, 4, };
for (auto&& el: reverse(ints_list))
cout << el << ',';
cout << '\n';
// on mutable lvalue references
vector<int> ints_vec { 0, 0, 0, 0, };
size_t i = 0;
for (int& el: reverse(ints_vec))
el += i++;
print_iterable(cout, ints_vec);
print_iterable(cout, reverse(ints_vec));
return 0;
}
prints as expected
4,3,2,1,
4,3,2,1,
3,2,1,0,
0,1,2,3,
NOTE: std::rbegin(), std::rend(), and std::make_reverse_iterator() are not yet implemented in GCC-4.9. I write these examples according to the standard, but they would not compile in stable g++. Nevertheless, adding temporary stubs for these three functions is very easy. Here is a sample implementation, definitely not complete but works well enough for most cases:
// --------------------------------------------------
template <typename I>
std::reverse_iterator<I> make_reverse_iterator (I i)
{
return std::reverse_iterator<I> { i };
}
// --------------------------------------------------
template <typename T>
auto rbegin (T& iterable)
{
return make_reverse_iterator(iterable.end());
}
template <typename T>
auto rend (T& iterable)
{
return make_reverse_iterator(iterable.begin());
}
// const container variants
template <typename T>
auto rbegin (const T& iterable)
{
return make_reverse_iterator(iterable.end());
}
template <typename T>
auto rend (const T& iterable)
{
return make_reverse_iterator(iterable.begin());
}
You could use a variable to make the calculation and use toFixed when you set the #diskamountUnit element value:
var amount = $("#disk").slider("value") * 1.60;
$("#diskamountUnit").val('$' + amount.toFixed(2));
You can also do that in one step, in the val method call, but IMO the first way is more readable:
$("#diskamountUnit").val('$' + ($("#disk").slider("value") * 1.60).toFixed(2));
This solution works for td's that need both border and padding for styling.
(Tested on Chrome 32, IE 11, Firefox 25)
CSS:
table {border-collapse: separate; border-spacing:0; } /* separate needed */
td { display: inline-block; width: 33% } /* Firefox need inline-block + width */
td { position: relative } /* needed to make td move */
td { left: 10px; } /* push all 10px */
td:first-child { left: 0px; } /* move back first 10px */
td:nth-child(3) { left: 20px; } /* push 3:rd another extra 10px */
/* to support older browsers we need a class on the td's we want to push
td.col1 { left: 0px; }
td.col2 { left: 10px; }
td.col3 { left: 20px; }
*/
HTML:
<table>
<tr>
<td class='col1'>Player</td>
<td class='col2'>Result</td>
<td class='col3'>Average</td>
</tr>
</table>
Updated 2016
Firefox now supports it without inline-block and a set width
table {border-collapse: separate; border-spacing:0; }
td { position: relative; padding: 5px; }
td { left: 10px; }
td:first-child { left: 0px; }
td:nth-child(3) { left: 20px; }
td { border: 1px solid gray; }

/* CSS table */
.table {display: table; }
.tr { display: table-row; }
.td { display: table-cell; }

.table { border-collapse: separate; border-spacing:0; }
.td { position: relative; padding: 5px; }
.td { left: 10px; }
.td:first-child { left: 0px; }
.td:nth-child(3) { left: 20px; }
.td { border: 1px solid gray; }

<table>
<tr>
<td>Player</td>
<td>Result</td>
<td>Average</td>
</tr>
</table>

<div class="table">
<div class="tr">
<div class="td">Player</div>
<div class="td">Result</div>
<div class="td">Average</div>
</div>
</div>
I agree that one shouldn't suppress warnings in classes or methods as one could overlook other, accidentally suppressed warnings. But IMHO it's absolutely reasonable to suppress a warning that affects only a single line of code.
@SuppressWarnings("unchecked")
Foo<Bar> mockFoo = mock(Foo.class);
Does m really need to be a data.frame() or will a matrix() suffice?
m <- matrix(0, ncol = 30, nrow = 2)
You can wrap a data.frame() around that if you need to:
m <- data.frame(m)
or all in one line: m <- data.frame(matrix(0, ncol = 30, nrow = 2))
Show all markers with Google Map
These methods store all the markers and automatically zoom the Google Map to show all of them.
// Declare the Markers List.
List<MarkerOptions> markerList;
private BitmapDescriptor vnrPoint,banPoint;
public void storeAllMarkers()
{
markerList=new ArrayList<>();
markerList.removeAll(markerList);
// latitude and longitude of Virudhunagar
double latitude1=9.587209;
double longitude1=77.951431;
vnrPoint=BitmapDescriptorFactory.fromResource(R.drawable.location_icon_1);
LatLng vnr = new LatLng(latitude1, longitude1);
MarkerOptions vnrMarker = new MarkerOptions();
vnrMarker.position(vnr);
vnrMarker.icon(vnrPoint);
markerList.add(vnrMarker);
// latitude and longitude of Bengaluru
double latitude2=12.972442;
double longitude2=77.580643;
banPoint=BitmapDescriptorFactory.fromResource(R.drawable.location_icon_2);
LatLng ban = new LatLng(latitude2, longitude2);
MarkerOptions bengalureMarker = new MarkerOptions();
bengalureMarker.position(ban);
bengalureMarker.icon(banPoint);
markerList.add(bengalureMarker);
// You can add any numbers of MarkerOptions like this.
showAllMarkers();
}
public void showAllMarkers()
{
LatLngBounds.Builder builder = new LatLngBounds.Builder();
for (MarkerOptions m : markerList) {
builder.include(m.getPosition());
}
LatLngBounds bounds = builder.build();
int width = getResources().getDisplayMetrics().widthPixels;
int height = getResources().getDisplayMetrics().heightPixels;
int padding = (int) (width * 0.30);
// Zoom and animate the google map to show all markers
CameraUpdate cu = CameraUpdateFactory.newLatLngBounds(bounds, width, height, padding);
googleMap.animateCamera(cu);
}
Foreign key and check constraints have the concept of being trusted or untrusted, as well as being enabled and disabled. See the MSDN page for ALTER TABLE for full details.
WITH CHECK is the default for adding new foreign key and check constraints; WITH NOCHECK is the default for re-enabling disabled foreign key and check constraints. It's important to be aware of the difference.
Having said that, any apparently redundant statements generated by utilities are simply there for safety and/or ease of coding. Don't worry about them.
What about this: ^[1-9][0-9]*$
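A quick sanity check of that pattern (a Python sketch; the sample strings are arbitrary):
import re

pattern = re.compile(r'^[1-9][0-9]*$')

print(bool(pattern.match("42")))   # True
print(bool(pattern.match("7")))    # True
print(bool(pattern.match("042")))  # False - leading zero
print(bool(pattern.match("0")))    # False - plain zero is rejected
print(bool(pattern.match("12a")))  # False - trailing non-digit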
docker ps -s will show the size of running containers only.
To check the size of all containers use docker ps -as
I have created a function for this purpose.
function free_port() {
if [ -z "$1" ]
then
echo "no port given"
else
PORT=$1
PID=$(sudo lsof -t -i :$PORT) # -t prints only the PID of the process using this port
if [ -z "$PID" ]
then
echo "port: $PORT is already free."
else
sudo kill -9 $PID # kill the process, which frees the port
echo "port: $PORT is now free."
fi
fi
}
free_port 80 # you need to change this port number
Copying & pasting this block of code into your terminal should free your desired port. Just remember to change the port number in the last line.
Step 1: Go to
http://localhost/security/xamppsecurity.php
Step 2: Set/Modify your password.
Step 3: Open C:\xampp\phpMyAdmin\config.inc.php using an editor.
Step 4: Check the following lines:
$cfg['Servers'][$i]['user'] = 'root';
$cfg['Servers'][$i]['password'] = 'your_password';
// your_password = the password that you have set in Step 2.
Step 5: Make sure the following line is set to TRUE: $cfg['Servers'][$i]['AllowNoPassword'] = true;
Step 6: Save the file, Restart MySQL from XAMPP Control Panel
Step 7: Login into phpmyadmin with root & your password.
Note: If again the same error comes, check the security page:
http://localhost/security/index.php
It will say:
The MySQL admin user root has no longer no password SECURE
PhpMyAdmin password login is enabled. SECURE
Then Restart your system, the problem will be solved.
If Spring Boot is being used, just configure this:
application.yml
logging:
level:
org.hibernate.SQL: DEBUG
org.hibernate.type: TRACE
application.properties
logging.level.org.hibernate.SQL=DEBUG
logging.level.org.hibernate.type=TRACE
and nothing more.
Your log will be something like this:
2020-12-07 | DEBUG | o.h.SQL:127 - insert into Employee (id, name, title, id) values (?, ?, ?, ?)
2020-12-07 | TRACE | o.h.t.d.s.BasicBinder:64 - binding parameter [1] as [VARCHAR] - [001]
2020-12-07 | TRACE | o.h.t.d.s.BasicBinder:64 - binding parameter [2] as [VARCHAR] - [John Smith]
2020-12-07 | TRACE | o.h.t.d.s.BasicBinder:52 - binding parameter [3] as [VARCHAR] - [null]
2020-12-07 | TRACE | o.h.t.d.s.BasicBinder:64 - binding parameter [4] as [BIGINT] - [1]
HTH
If you intend on having multiple hosts/database connections, the ~/.pgpass file is the way to go.
Steps:
1. vim ~/.pgpass or similar. Input your information in the following format:
hostname:port:database:username:password
Do not add string quotes around your field values. You can also use * as a wildcard for your port/database fields.
2. chmod 0600 ~/.pgpass in order for it to not be silently ignored by psql.
3. alias postygresy='psql --host hostname database_name -U username'
The values should match those that you inputted to the ~/.pgpass file.
4. . ~/.bashrc or similar. Note that if you have an export PGPASSWORD='' variable set, it will take precedence over the file.
Keyword float:
<h1 style="text-align:left;float:left;">Title</h1>
<h2 style="text-align:right;float:right;">Context</h2>
<hr style="clear:both;"/>
Uninstall the plugins first. And then try this. This uninstaller worked like a charm (I didn't even uninstall 2015 myself, it did everything on its own)!
If you don't need to customize the default toString() function, another way is to override the toString() method so that it returns all attributes to be compared, and then compare the toString() output of the two objects. I generated the toString() method using the IntelliJ IDEA IDE, which includes the class name in the string.
public class Greeting {
private String greeting;
@Override
public boolean equals(Object obj) {
if (this == obj) return true;
return this.toString().equals(obj.toString());
}
@Override
public String toString() {
return "Greeting{" +
"greeting='" + greeting + '\'' +
'}';
}
}
I'm using Database First and when this error happened to me my solution was to force ProviderManifestToken="2005" in edmx file (making the models compatible with SQL Server 2005). Don't know if something similar is possible for Code First.
This technique solved my problem:
In parent form:
frmEmployee frm = new frmEmployee();
frm.MdiParent = this;
frm.Dock = DockStyle.Fill;
frm.Show();
In the child form (Load event):
this.WindowState = FormWindowState.Maximized;
Simple trick with jQuery and CSS, like so:
JQuery:
$('input[value=""]').addClass('empty');
$('input').keyup(function(){
if( $(this).val() == ""){
$(this).addClass("empty");
}else{
$(this).removeClass("empty");
}
});
CSS:
input.empty:valid{
box-shadow: none;
background-image: none;
border: 1px solid #000;
}
input:invalid,
input:required {
box-shadow: 3px 1px 5px rgba(200, 0, 0, 0.85);
border: 1px solid rgb(200,0,0);
}
input:valid{
box-shadow: none;
border: 1px solid #0f0;
}
I have used this in quick and dirty situations:
// react render method:
render() {
return (
<div>
{ this.props.textOrHtml.indexOf('</') !== -1
? (
<div dangerouslySetInnerHTML={{__html: this.props.textOrHtml.replace(/(<? *script)/gi, 'illegalscript')}} >
</div>
)
: this.props.textOrHtml
}
</div>
)
}
You have to disable the sandbox for Groovy in your job configuration.
Currently this is not possible for multibranch projects where the groovy script comes from the scm. For more information see https://issues.jenkins-ci.org/browse/JENKINS-28178
You can use a window MAX() like this:
SELECT
*,
max_date = MAX(date) OVER (PARTITION BY group)
FROM table
to get max dates per group alongside other data:
group date cash checks max_date
----- -------- ---- ------ --------
1 1/1/2013 0 0 1/3/2013
2 1/1/2013 0 800 1/1/2013
1 1/3/2013 0 700 1/3/2013
3 1/1/2013 0 600 1/5/2013
1 1/2/2013 0 400 1/3/2013
3 1/5/2013 0 200 1/5/2013
Using the above output as a derived table, you can then get only rows where date matches max_date:
SELECT
group,
date,
checks
FROM (
SELECT
*,
max_date = MAX(date) OVER (PARTITION BY group)
FROM table
) AS s
WHERE date = max_date
;
to get the desired result.
Basically, this is similar to @Twelfth's suggestion but avoids a join and may thus be more efficient.
You can try the method at SQL Fiddle.
Answers above helped greatly.
Here is the Swift version.
@IBOutlet weak var priceLabel: UILabel!
*.... lines of code later*
self.priceLabel.font = self.priceLabel.font.fontWithSize(22)
You could also just call the script from the terminal, outputting everything to a file, if that helps. This way:
$ /path/to/the/script.py > output.txt
This will overwrite the file. You can use >> to append to it.
If you want errors to be logged in the file as well, use &>> or &>.
I did some performance testing for the following approaches:
select * from (
select a.*, ROWNUM rnum from (
<select statement with order by clause>
) a where rownum <= MAX_ROW
) where rnum >= MIN_ROW
select * from (
<select statement with order by clause>
) where myrow between MIN_ROW and MAX_ROW
select * from (
select statement, rownum as RN with order by clause
) where a.rn >= MIN_ROW and a.rn <= MAX_ROW
Table had 10 million records, sort was on an unindexed datetime row:
Selecting first 10 rows took:
Selecting rows between 100,000 and 100,010:
Selecting rows between 9,000,000 and 9,000,010:
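For concreteness, the first approach with hypothetical names and a hypothetical row range filled in might look like this (the table, column, and bounds are assumptions for illustration only):
select * from (
  select a.*, ROWNUM rnum from (
    select * from orders order by created_at desc
  ) a where rownum <= 20
) where rnum >= 11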
I have also faced the same issue in the recent past. For me, it helped to run the following commands one by one in the terminal.
sudo npm uninstall -g angular-cli
sudo npm cache clean
After this run
ng -v
If you still get angular-cli version 1.0.0-beta.2x.x, then run the following command
which ng
It will show the ng path. Go to that path; if it is a link to another file, remove both the link and the actual ng file. In my case the link is at /usr/bin/ng and the actual ng file is at /lib/node_modules/@angular/cli/bin/ng.
sudo rm -rf /lib/node_modules/@angular/cli/bin/ng
sudo rm -rf /usr/bin/ng
Next you need to install @angular/cli using
sudo npm install -g @angular/cli
Close all terminals, run ng -v, and you are set. Maybe it will help someone. Thanks :)
Linux based Tomcat6 should have /etc/tomcat6/tomcat6.conf
# System-wide configuration file for tomcat6 services
# This will be sourced by tomcat6 and any secondary service
# Values will be overridden by service-specific configuration
# files in /etc/sysconfig
#
# Use this one to change default values for all services
# Change the service specific ones to affect only one service
# (see, for instance, /etc/sysconfig/tomcat6)
#
# Where your java installation lives
#JAVA_HOME="/usr/lib/jvm/java-1.5.0"
# Where your tomcat installation lives
CATALINA_BASE="/usr/share/tomcat6"
...
The problem is that your regex is a string, but html is bytes:
>>> type(html)
<class 'bytes'>
Since python doesn't know how those bytes are encoded, it throws an exception when you try to use a string regex on them.
You can either decode the bytes to a string:
html = html.decode('ISO-8859-1') # encoding may vary!
title = re.findall(pattern, html) # no more error
Or use a bytes regex:
regex = rb'<title>(,+?)</title>'
# ^
In this particular context, you can get the encoding from the response headers:
with urllib.request.urlopen(url) as response:
encoding = response.info().get_param('charset', 'utf8')
html = response.read().decode(encoding)
See the urlopen documentation for more details.
resultList = results.Where(x=>x.Id != 2).ToList();
There's a little Linq helper I like that's easy to implement and can make queries with "where not" conditions a little easier to read:
public static IEnumerable<T> ExceptWhere<T>(this IEnumerable<T> source, Predicate<T> predicate)
{
return source.Where(x=>!predicate(x));
}
//usage in above situation
resultList = results.ExceptWhere(x=>x.Id == 2).ToList();
Download this file :- (https://pypi.python.org/packages/1f/3b/ee6f354bcb1e28a7cd735be98f39ecf80554948284b41e9f7965951befa6/pyserial-3.2.1.tar.gz#md5=7142a421c8b35d2dac6c47c254db023d):
cd /opt
sudo tar -xvf ~/Downloads/pyserial-3.2.1.tar.gz -C .
cd /opt/pyserial-3.2.1
sudo python setup.py install
Actually it depends on your use case.
1) You want to protect your route from unauthorized users
If that is the case you can use the component called <Redirect /> and can implement the following logic:
import React from 'react'
import { Redirect } from 'react-router-dom'
const ProtectedComponent = () => {
  if (authFails) {
    return <Redirect to='/login' />
  }
  return <div> My Protected Component </div>
}
Keep in mind that if you want <Redirect /> to work the way you expect, you should place it inside of your component's render method so that it should eventually be considered as a DOM element, otherwise it won't work.
2) You want to redirect after a certain action (let's say after creating an item)
In that case you can use history:
myFunction() {
addSomeStuff(data).then(() => {
this.props.history.push('/path')
}).catch((error) => {
console.log(error)
})
}
or
myFunction() {
addSomeStuff()
this.props.history.push('/path')
}
In order to have access to history, you can wrap your component with an HOC called withRouter. When you wrap your component with it, it passes match, location and history props. For more detail please have a look at the official documentation for withRouter.
If your component is a child of a <Route /> component, i.e. if it is something like <Route path='/path' component={myComponent} />, you don't have to wrap your component with withRouter, because <Route /> passes match, location, and history to its child.
3) Redirect after clicking some element
There are two options here. You can use history.push() by passing it to an onClick event:
<div onClick={() => this.props.history.push('/path')}> some stuff </div>
or you can use a <Link /> component:
<Link to='/path' > some stuff </Link>
I think the rule of thumb with this case is to try to use <Link /> first, I suppose especially because of performance.
The solutions here didn't work for me as I'm styling react components.
What worked though for the sidebar was
.sidebar{
position: sticky;
top: 0;
}
Hope this helps someone.
#include <stdio.h>

struct Bool {
    int true;
    int false;
};

int main() {
    /* bool is a variable of data type struct Bool */
    struct Bool bool = {1, 0};
    /* below I'm accessing struct members through the variable bool */
    printf("true is: %d\n", bool.true);
    return 0;
}
In my case, the padding was because of the sectionHeader and sectionFooter heights, where storyboard allowed me to change it to minimum 1. So in viewDidLoad method:
tableView.sectionHeaderHeight = 0
tableView.sectionFooterHeight = 0
No need for jQuery; you can simply do
if(yourObject['email']){
// what to do if this property exists.
}
Any truthy value for email will return true; if there is no such property, or the property value is null or undefined (or any other falsy value, such as an empty string), the check will result in false.
From here: http://www.anddev.org/working_with_files-t115.html
//Writing a file...
try {
// catches IOException below
final String TESTSTRING = new String("Hello Android");
/* We have to use the openFileOutput()-method
* the ActivityContext provides, to
* protect your file from others and
* This is done for security-reasons.
* We chose MODE_PRIVATE here, so the
* file stays private to this app. */
FileOutputStream fOut = openFileOutput("samplefile.txt",
MODE_PRIVATE);
OutputStreamWriter osw = new OutputStreamWriter(fOut);
// Write the string to the file
osw.write(TESTSTRING);
/* ensure that everything is
* really written out and close */
osw.flush();
osw.close();
//Reading the file back...
/* We have to use the openFileInput()-method
* the ActivityContext provides.
* Again for security reasons with
* openFileInput(...) */
FileInputStream fIn = openFileInput("samplefile.txt");
InputStreamReader isr = new InputStreamReader(fIn);
/* Prepare a char-Array that will
* hold the chars we read back in. */
char[] inputBuffer = new char[TESTSTRING.length()];
// Fill the Buffer with data from the file
isr.read(inputBuffer);
// Transform the chars to a String
String readString = new String(inputBuffer);
// Check if we read back the same chars that we had written out
boolean isTheSame = TESTSTRING.equals(readString);
Log.i("File Reading stuff", "success = " + isTheSame);
} catch (IOException ioe)
{ioe.printStackTrace();}
Be careful: NOT IN is not an alias for <> ANY, but for <> ALL!
http://dev.mysql.com/doc/refman/5.0/en/any-in-some-subqueries.html
SELECT c FROM t1 LEFT JOIN t2 USING (c) WHERE t2.c IS NULL
can't be replaced by
SELECT c FROM t1 WHERE c NOT IN (SELECT c FROM t2)
You must use
SELECT c FROM t1 WHERE c <> ANY (SELECT c FROM t2)
If you have an instance function (i.e. one that gets passed self) you can use self to get a reference to the class using self.__class__
For example, in the code below tornado creates an instance to handle requests, but we can get hold of the get_handler class and use it to hold a riak client, so we do not need to create one for every request.
import tornado.web
import riak
class get_handler(tornado.web.RequestHandler):
riak_client = None
def post(self):
cls = self.__class__
if cls.riak_client is None:
cls.riak_client = riak.RiakClient(pb_port=8087, protocol='pbc')
# Additional code to send response to the request ...
Here is another way to do it. It's documented on the MySQL official website. https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html
In spirit, it's using the same mechanism as @Trey Stout's answer. However, I find this one prettier and more readable.
insert_stmt = (
"INSERT INTO employees (emp_no, first_name, last_name, hire_date) "
"VALUES (%s, %s, %s, %s)"
)
data = (2, 'Jane', 'Doe', datetime.date(2012, 3, 23))
cursor.execute(insert_stmt, data)
And to better illustrate any need for variables:
NB: note the escape being done.
employee_id = 2
first_name = "Jane"
last_name = "Doe"
insert_stmt = (
"INSERT INTO employees (emp_no, first_name, last_name, hire_date) "
"VALUES (%s, %s, %s, %s)"
)
data = (employee_id, conn.escape_string(first_name), conn.escape_string(last_name), datetime.date(2012, 3, 23))
cursor.execute(insert_stmt, data)
In pure JS it will be much simpler
foo.onsubmit = e=> {
e.preventDefault();
fetch(foo.action,{method:'post', body: new FormData(foo)});
}
<form name="foo" action="form.php" method="POST" id="foo">
<label for="bar">A bar</label>
<input id="bar" name="bar" type="text" value="" />
<input type="submit" value="Send" />
</form>
For your first question try
Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS);
(available since API 8)
To access individual files in this directory use either File.list() or File.listFiles(). Seems that reporting download progress is only possible in notification, see here.
You're a little confused about how objects work in JavaScript. The object's reference is the value of the variable. There is no unserialized value. When you create an object, its structure is stored in memory and the variable it was assigned to holds a reference to that structure.
Even if what you're asking was provided in some sort of easy, native language construct it would still technically be cloning.
JavaScript is really just pass-by-value... it's just that the value passed might be a reference to something.
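A tiny sketch of what that means in practice (the variable names are made up):
const a = { name: 'original' };
const b = a;             // copies the reference, not the structure
b.name = 'changed';
console.log(a.name);     // 'changed' - both variables point at the same object in memory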
EDIT: http://jsfiddle.net/nCFGL/223/ My Example.
You should be able to do it as follows:
var pieData = [{
value: 30,
color: "#F38630",
label: 'Sleep',
labelColor: 'white',
labelFontSize: '16'
},
...
];
Include the Chart.js located at:
Flexbox Solution
Pros:
Cons:
The way it works is by always having flex-basis: auto on the element with content, and transitioning flex-grow and flex-shrink instead.
Edit: Improved JS Fiddle inspired by the Xbox One interface.
* {
margin: 0;
padding: 0;
box-sizing: border-box;
transition: 0.25s;
font-family: monospace;
}

body {
margin: 10px 0 0 10px;
}

.box {
width: 150px;
height: 150px;
margin: 0 2px 10px 0;
background: #2d333b;
border: solid 10px #20262e;
overflow: hidden;
display: inline-flex;
flex-direction: column;
}

.space {
flex-basis: 100%;
flex-grow: 1;
flex-shrink: 0;
}

p {
flex-basis: auto;
flex-grow: 0;
flex-shrink: 1;
background: #20262e;
padding: 10px;
width: 100%;
text-align: left;
color: white;
}

.box:hover .space {
flex-grow: 0;
flex-shrink: 1;
}

.box:hover p {
flex-grow: 1;
flex-shrink: 0;
}

<div class="box">
<div class="space"></div>
<p>
Super Metroid Prime Fusion
</p>
</div>
<div class="box">
<div class="space"></div>
<p>
Resident Evil 2 Remake
</p>
</div>
<div class="box">
<div class="space"></div>
<p>
Yolo The Game
</p>
</div>
<div class="box">
<div class="space"></div>
<p>
Final Fantasy 7 Remake + All Additional DLC + Golden Tophat
</p>
</div>
<div class="box">
<div class="space"></div>
<p>
DerpVille
</p>
</div>
You can make this connection in interface builder.
In your storyboard, click the assistant editor at the top of the screen (two circles in the middle).
Ctrl + Click on the textfield in interface builder.
Drag from EditingChanged to inside your view controller class in the assistant view.
Name your function ("textDidChange" for example) and click connect.
These questions may be relevant to what you're asking for:
Here are my thoughts: You can stack up more than one call in your onclick event like this:
<select id="sel" onchange='alert("changed")'>
<option value='1'>One</option>
<option value='2'>Two</option>
<option value='3'>Three</option>
</select>
<input type="button" onclick='document.getElementById("sel").options[1].selected = true; alert("changed");' value="Change option to 2" />
You could also call a function to do this.
If you really want to call one function and have both behave the same way, I think something like this should work. It doesn't really follow the best practice of "Functions should do one thing and do it well", but it does allow you to call one function to handle both ways of changing the dropdown. Basically I pass (value) on the onchange event and (null, index of option) on the onclick event.
Here is the codepen: http://codepen.io/mmaynar1/pen/ZYJaaj
<select id="sel" onchange='doThisOnChange(this.value)'>
<option value='1'>One</option>
<option value='2'>Two</option>
<option value='3'>Three</option>
</select>
<input type="button" onclick='doThisOnChange(null,1);' value="Change option to 2"/>
<script>
doThisOnChange = function( value, optionIndex)
{
if ( optionIndex != null )
{
var option = document.getElementById( "sel" ).options[optionIndex];
option.selected = true;
value = option.value;
}
alert( "Do something with the value: " + value );
}
</script>
My guess is that a wrong version of the project A jar is in your local Maven repository. It seems that the dependency is resolved, otherwise I think Maven would not start compiling, but usually these compile errors mean that you have a version mix-up. Try to run a mvn clean install of your project A and see if it changes something for project B...
Also a little more information on your setting could be useful:
For saving the current time to the Firebase database I use Unix epoch conversion:
let timestamp = NSDate().timeIntervalSince1970
and for decoding Unix epoch time to a Date():
let myTimeInterval = TimeInterval(timestamp)
let time = NSDate(timeIntervalSince1970: TimeInterval(myTimeInterval))
In my case the error 1067 was caused with a specific version of Tomcat 7.0.96 32-bit in combination with AdoptOpenJDK. Spent two hours on it, un-installing, re-installing and trying different Java settings but Tomcat would not start. See... ASF Bugzilla – Bug 63625 seems to point at the issue though they refer to seeing a different error.
I tried 7.0.99 32-bit and it started straight away with the same AdoptOpenJDK 32-bit binary install.
You can use $group and $max:
db.getCollection('kids').aggregate([
{
$group: {
_id: null,
maxQuantity: {$max: "$age"}
}
}
])
For some reason using python3 I had to escape the "\"-sign
somestring.replace('\\n', '')
Hope this helps someone else!
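A small sketch of when the escaped form matters (assuming the string contains a literal backslash followed by n rather than a real newline character):
s = "line one\\nline two"      # the two characters '\' and 'n', not a newline
print(s.replace('\n', ''))     # unchanged: there is no real newline to remove
print(s.replace('\\n', ''))    # 'line oneline two': removes the literal \n sequence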
Extending the accepted responses, when you are using JSON in a REST context...
There is a strong argument for using application/x-resource+json and application/x-collection+json when you are representing REST resources and collections.
And if you decide to follow the jsonapi specification, you should use application/vnd.api+json, as it is documented.
Although there is no universal standard, it is clear that the added semantics of the resources being transferred justify a more explicit Content-Type than just application/json.
Following this reasoning, other contexts could justify a more specific Content-Type.
The Symfony project tries to keep its HTTP methods joined up with CRUD methods, and their list associates them as follows: GET retrieves a resource, POST creates one, PUT updates one, and DELETE removes one.
It's worth noting that, as they say on that page, "In reality, many modern browsers don't support the PUT and DELETE methods."
From what I remember, Symfony "fakes" PUT and DELETE for those browsers that don't support them when generating its forms, in order to try to be as close to using the theoretically-correct HTTP method even when a browser doesn't support it.
Here is a demo react_hooks_debug_print.html in React hooks that is based on Chris's answer. The JSON data example is from https://json.org/example.html.
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<title>Hello World</title>
<script src="https://unpkg.com/react@16/umd/react.development.js"></script>
<script src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script>
<!-- Don't use this in production: -->
<script src="https://unpkg.com/[email protected]/babel.min.js"></script>
</head>
<body>
<div id="root"></div>
<script src="https://raw.githubusercontent.com/cassiozen/React-autobind/master/src/autoBind.js"></script>
<script type="text/babel">
let styles = {
root: { backgroundColor: '#1f4662', color: '#fff', fontSize: '12px', },
header: { backgroundColor: '#193549', padding: '5px 10px', fontFamily: 'monospace', color: '#ffc600', },
pre: { display: 'block', padding: '10px 30px', margin: '0', overflow: 'scroll', }
}
let data = {
"glossary": {
"title": "example glossary",
"GlossDiv": {
"title": "S",
"GlossList": {
"GlossEntry": {
"ID": "SGML",
"SortAs": "SGML",
"GlossTerm": "Standard Generalized Markup Language",
"Acronym": "SGML",
"Abbrev": "ISO 8879:1986",
"GlossDef": {
"para": "A meta-markup language, used to create markup languages such as DocBook.",
"GlossSeeAlso": [
"GML",
"XML"
]
},
"GlossSee": "markup"
}
}
}
}
}
const DebugPrint = () => {
const [show, setShow] = React.useState(false);
return (
<div key={1} style={styles.root}>
<div style={styles.header} onClick={ ()=>{setShow(!show)} }>
<strong>Debug</strong>
</div>
{ show
? (
<pre style={styles.pre}>
{JSON.stringify(data, null, 2) }
</pre>
)
: null
}
</div>
)
}
ReactDOM.render(
<DebugPrint data={data} />,
document.getElementById('root')
);
</script>
</body>
</html>
Or in the following way, add the style into header:
<style>
.root { background-color: #1f4662; color: #fff; fontSize: 12px; }
.header { background-color: #193549; padding: 5px 10px; fontFamily: monospace; color: #ffc600; }
.pre { display: block; padding: 10px 30px; margin: 0; overflow: scroll; }
</style>
And replace DebugPrint with the following:
const DebugPrint = () => {
// https://stackoverflow.com/questions/30765163/pretty-printing-json-with-react
const [show, setShow] = React.useState(false);
return (
<div key={1} className='root'>
<div className='header' onClick={ ()=>{setShow(!show)} }>
<strong>Debug</strong>
</div>
{ show
? (
<pre className='pre'>
{JSON.stringify(data, null, 2) }
</pre>
)
: null
}
</div>
)
}
To display a phone number with (###) ###-#### format, you can create a new HtmlHelper.
@Html.DisplayForPhone(item.Phone)
public static class HtmlHelperExtensions
{
public static HtmlString DisplayForPhone(this HtmlHelper helper, string phone)
{
if (phone == null)
{
return new HtmlString(string.Empty);
}
string formatted = phone;
if (phone.Length == 10)
{
formatted = $"({phone.Substring(0,3)}) {phone.Substring(3,3)}-{phone.Substring(6,4)}";
}
else if (phone.Length == 7)
{
formatted = $"{phone.Substring(0,3)}-{phone.Substring(3,4)}";
}
string s = $"<a href='tel:{phone}'>{formatted}</a>";
return new HtmlString(s);
}
}
Suppose https://www.mozilla.org/foo.html executes the following JavaScript:
const stateObj = { foo: 'bar' };
history.pushState(stateObj, '', 'bar.html');
This will cause the URL bar to display https://www.mozilla.org/bar.html, but won't cause the browser to load bar.html or even check that bar.html exists.
A small improvement to @FishBoy's suggestion is to use the id projection, so you don't have to hard-code the identifier property name.
criteria.setProjection(Projections.distinct(Projections.id()));
public static void reverseString(String s){
System.out.println("---------");
for(int i=s.length()-1; i>=0;i--){
System.out.print(s.charAt(i));
}
System.out.println();
}
Just add style="display:none;" to the <div>.
Fiddle: http://jsfiddle.net/krY56/13/
jQuery:
function toggler(divId) {
$("#" + divId).toggle();
}
It's preferred to have a CSS class .hidden:
.hidden {
display:none;
}
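With that class in place, the toggle can flip the class instead of inline styles (a small sketch):
function toggler(divId) {
    $("#" + divId).toggleClass("hidden");
}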
I've just solved this problem myself. I found the solution on MSDN: http://msdn.microsoft.com/en-us/library/ms155391.aspx.
The format basically is
http://<server>/reportserver?/<path>/<report>&rs:Command=Render&<parameter>=<value>
PHPSESSID is a session cookie auto-generated by the server; it contains a long random identifier that is handed out by the server itself.
<div *ngFor="let celeb of singers">
<p [ngClass]="{
'text-success':celeb.country === 'USA',
'text-secondary':celeb.country === 'Canada',
'text-danger':celeb.country === 'Puorto Rico',
'text-info':celeb.country === 'India'
}">{{ celeb.artist }} ({{ celeb.country }})
</p>
</div>
If you want a pure iterator solution for large strings with constant memory usage:
import re
import itertools
from typing import Iterable
def ngrams_iter(input: str, ngram_size: int, token_regex=r"[^\s]+") -> Iterable[str]:
input_iters = [
map(lambda m: m.group(0), re.finditer(token_regex, input))
for n in range(ngram_size)
]
# Skip first words
for n in range(1, ngram_size): list(map(next, input_iters[n:]))
output_iter = itertools.starmap(
lambda *args: " ".join(args),
zip(*input_iters)
)
return output_iter
Test:
input = "If you want a pure iterator solution for large strings with constant memory usage"
list(ngrams_iter(input, 5))
Output:
['If you want a pure',
'you want a pure iterator',
'want a pure iterator solution',
'a pure iterator solution for',
'pure iterator solution for large',
'iterator solution for large strings',
'solution for large strings with',
'for large strings with constant',
'large strings with constant memory',
'strings with constant memory usage']
Going off of @Rok Kralj's answer (best IMO) to check if any of the needles exist in the haystack, you can use (bool) instead of !!, which sometimes can be confusing during code review.
function in_array_any($needles, $haystack) {
return (bool)array_intersect($needles, $haystack);
}
echo in_array_any( array(3,9), array(5,8,3,1,2) ); // true, since 3 is present
echo in_array_any( array(4,9), array(5,8,3,1,2) ); // false, neither 4 nor 9 is present
I ran some logs as per the answers above and here is the output:
Starting Activity
On Activity Load (First Time)
————————————————————————————————————————————————
D/IndividualChatActivity: onCreate:
D/IndividualChatActivity: onStart:
D/IndividualChatActivity: onResume:
D/IndividualChatActivity: onPostResume:
Reload After BackPressed
————————————————————————————————————————————————
D/IndividualChatActivity: onCreate:
D/IndividualChatActivity: onStart:
D/IndividualChatActivity: onResume:
D/IndividualChatActivity: onPostResume:
OnMaximize(Circle Button)
————————————————————————————————————————————————
D/IndividualChatActivity: onRestart:
D/IndividualChatActivity: onStart:
D/IndividualChatActivity: onResume:
D/IndividualChatActivity: onPostResume:
OnMaximize(Square Button)
————————————————————————————————————————————————
D/IndividualChatActivity: onRestart:
D/IndividualChatActivity: onStart:
D/IndividualChatActivity: onResume:
D/IndividualChatActivity: onPostResume:
Stopping The Activity
On BackPressed
————————————————————————————————————————————————
D/IndividualChatActivity: onPause:
D/IndividualChatActivity: onStop:
D/IndividualChatActivity: onDestroy:
OnMinimize (Circle Button)
————————————————————————————————————————————————
D/IndividualChatActivity: onPause:
D/IndividualChatActivity: onStop:
OnMinimize (Square Button)
————————————————————————————————————————————————
D/IndividualChatActivity: onPause:
D/IndividualChatActivity: onStop:
Going To Another Activity
————————————————————————————————————————————————
D/IndividualChatActivity: onPause:
D/IndividualChatActivity: onStop:
Close The App
————————————————————————————————————————————————
D/IndividualChatActivity: onDestroy:
In my personal opinion, only two are required: onStart and onStop.
onResume seems to be in every instance of getting back, and onPause in every instance of leaving (except for closing the app).
The only operator overloading in Java is + on Strings (JLS 15.18.1 String Concatenation Operator +).
The community has been divided into thirds for years: 1/3 doesn't want it, 1/3 wants it, and 1/3 doesn't care.
You can use unicode to create method names that are symbols... so if you have a symbol you want to use you could do myVal = x.$(y); where $ is the symbol and x is not a primitive... but that is going to be dodgy in some editors and is limiting since you cannot do it on a primitive.
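A tiny sketch of what that looks like in practice (the class and values are made up for illustration):
class Num {
    private final int value;
    Num(int value) { this.value = value; }

    // '$' used as an operator-like method name; this only works on objects, not primitives
    Num $(Num other) { return new Num(this.value + other.value); }
}
// usage: Num myVal = x.$(y);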
I got this error while using malloc() to allocate some memory to a struct *. After spending some time debugging the code, I finally used the free() function to free the allocated memory, and subsequently the error message was gone :)
I would suggest the following:
String[] parsedInput = str.split("\n");
String firstName = parsedInput[0].split(": ")[1];
String lastName = parsedInput[1].split(": ")[1];
myMap.put(firstName, lastName);
Instead of
Object.values(myObject);
use
Object["values"](myObject);
In your example case:
const values = Object["values"](data).map(x => x.substr(0, x.length - 4));
This will hide the ts compiler error.
Shortest one liner
Change the UTC day from 6 to 5 if you want the array to start from Sunday.
const getWeekDays = (locale) => [...Array(7).keys()].map((v)=>new Date(Date.UTC(1970, 0, 6+v)).toLocaleDateString(locale, { weekday: 'long' }));
console.log(getWeekDays('de-DE'));